Test Report: Docker_Linux_crio 22021

                    
714686ca7bbd77e34d847e892f53d4af2ede556f:2025-12-02:42609

Test failures (48/415)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.26
44 TestAddons/parallel/Registry 14.15
45 TestAddons/parallel/RegistryCreds 0.45
46 TestAddons/parallel/Ingress 148.4
47 TestAddons/parallel/InspektorGadget 5.32
48 TestAddons/parallel/MetricsServer 5.37
50 TestAddons/parallel/CSI 33.2
51 TestAddons/parallel/Headlamp 2.62
52 TestAddons/parallel/CloudSpanner 6.26
53 TestAddons/parallel/LocalPath 10.15
54 TestAddons/parallel/NvidiaDevicePlugin 5.27
55 TestAddons/parallel/Yakd 5.26
56 TestAddons/parallel/AmdGpuDevicePlugin 5.27
106 TestFunctional/parallel/ServiceCmdConnect 603.06
123 TestFunctional/parallel/ServiceCmd/DeployApp 600.68
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.89
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.71
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
161 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
162 TestFunctional/parallel/ServiceCmd/Format 0.58
163 TestFunctional/parallel/ServiceCmd/URL 0.55
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 603.17
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 600.68
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 0.92
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.89
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.8
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.31
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.22
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.38
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.55
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.55
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.55
294 TestJSONOutput/pause/Command 2.08
300 TestJSONOutput/unpause/Command 2.08
366 TestPause/serial/Pause 6.08
452 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.45
454 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.72
462 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.26
464 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.88
475 TestStartStop/group/old-k8s-version/serial/Pause 6.61
477 TestStartStop/group/no-preload/serial/Pause 6.57
484 TestStartStop/group/embed-certs/serial/Pause 6.48
487 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.38
490 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.14
496 TestStartStop/group/newest-cni/serial/Pause 5.35
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable volcano --alsologtostderr -v=1: exit status 11 (263.395393ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 15:17:24.131979  277668 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:24.132227  277668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:24.132236  277668 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:24.132240  277668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:24.132462  277668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:17:24.132697  277668 mustload.go:66] Loading cluster: addons-141726
	I1202 15:17:24.133019  277668 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:24.133040  277668 addons.go:622] checking whether the cluster is paused
	I1202 15:17:24.133120  277668 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:24.133137  277668 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:17:24.133524  277668 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:17:24.152655  277668 ssh_runner.go:195] Run: systemctl --version
	I1202 15:17:24.152711  277668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:17:24.172745  277668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:17:24.273170  277668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:17:24.273263  277668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:17:24.304094  277668 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:17:24.304125  277668 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:17:24.304130  277668 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:17:24.304134  277668 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:17:24.304137  277668 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:17:24.304141  277668 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:17:24.304144  277668 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:17:24.304147  277668 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:17:24.304149  277668 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:17:24.304161  277668 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:17:24.304167  277668 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:17:24.304172  277668 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:17:24.304177  277668 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:17:24.304182  277668 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:17:24.304187  277668 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:17:24.304199  277668 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:17:24.304207  277668 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:17:24.304214  277668 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:17:24.304221  277668 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:17:24.304225  277668 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:17:24.304229  277668 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:17:24.304234  277668 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:17:24.304243  277668 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:17:24.304247  277668 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:17:24.304252  277668 cri.go:89] found id: ""
	I1202 15:17:24.304309  277668 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:17:24.319156  277668 out.go:203] 
	W1202 15:17:24.320635  277668 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:17:24.320660  277668 out.go:285] * 
	* 
	W1202 15:17:24.323944  277668 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:17:24.325231  277668 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
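Most of the addon failures in this run share the signature above: "addons disable" exits 11 with MK_ADDON_DISABLE_PAUSED because the paused-state probe runs "sudo runc list -f json" on the node and that command fails with "open /run/runc: no such file or directory" under the crio runtime. Below is a minimal Go sketch for re-running the two probes seen in this log from the test host; the profile name, binary path, and node commands are copied from this report, while the "minikube ssh -- <cmd>" pass-through form is an assumption to adapt as needed. The Registry and RegistryCreds sections below show the identical error.

// repro_probe.go — a minimal sketch, not minikube's actual implementation.
package main

import (
	"fmt"
	"os/exec"
)

// runOnNode runs a command inside the minikube node over "minikube ssh".
// The "-- <cmd>" pass-through form is an assumption; adjust if needed.
func runOnNode(profile string, nodeCmd ...string) ([]byte, error) {
	args := append([]string{"ssh", "-p", profile, "--"}, nodeCmd...)
	return exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
}

func main() {
	profile := "addons-141726" // profile name taken from this report

	// Probe 1: the crictl listing that succeeds in the log above.
	if out, err := runOnNode(profile, "sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system"); err != nil {
		fmt.Printf("crictl probe failed: %v\n%s", err, out)
		return
	}

	// Probe 2: the runc listing that fails here with
	// "open /run/runc: no such file or directory" on the crio node.
	if out, err := runOnNode(profile, "sudo", "runc", "list", "-f", "json"); err != nil {
		fmt.Printf("runc probe failed (matches MK_ADDON_DISABLE_PAUSED): %v\n%s", err, out)
		return
	}
	fmt.Println("both probes succeeded; the paused check should pass")
}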

TestAddons/parallel/Registry (14.15s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.149543ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-4ndqk" [ca026742-659d-47f4-80ef-ccc67046c4d3] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003104247s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-md75n" [73921430-8808-4e00-888a-b97d19bf02e5] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003476899s
addons_test.go:392: (dbg) Run:  kubectl --context addons-141726 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-141726 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-141726 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.680055297s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 ip
2025/12/02 15:17:49 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable registry --alsologtostderr -v=1: exit status 11 (250.807785ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 15:17:49.135182  280165 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:49.135273  280165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:49.135281  280165 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:49.135286  280165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:49.135506  280165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:17:49.135775  280165 mustload.go:66] Loading cluster: addons-141726
	I1202 15:17:49.136076  280165 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:49.136096  280165 addons.go:622] checking whether the cluster is paused
	I1202 15:17:49.136174  280165 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:49.136190  280165 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:17:49.136559  280165 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:17:49.154368  280165 ssh_runner.go:195] Run: systemctl --version
	I1202 15:17:49.154430  280165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:17:49.172037  280165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:17:49.272278  280165 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:17:49.272351  280165 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:17:49.305188  280165 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:17:49.305210  280165 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:17:49.305214  280165 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:17:49.305218  280165 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:17:49.305221  280165 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:17:49.305225  280165 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:17:49.305229  280165 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:17:49.305233  280165 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:17:49.305237  280165 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:17:49.305247  280165 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:17:49.305252  280165 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:17:49.305256  280165 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:17:49.305261  280165 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:17:49.305265  280165 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:17:49.305269  280165 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:17:49.305283  280165 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:17:49.305286  280165 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:17:49.305290  280165 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:17:49.305293  280165 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:17:49.305296  280165 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:17:49.305301  280165 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:17:49.305303  280165 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:17:49.305306  280165 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:17:49.305309  280165 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:17:49.305311  280165 cri.go:89] found id: ""
	I1202 15:17:49.305357  280165 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:17:49.319991  280165 out.go:203] 
	W1202 15:17:49.321585  280165 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:17:49.321620  280165 out.go:285] * 
	* 
	W1202 15:17:49.325032  280165 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:17:49.326395  280165 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.15s)

TestAddons/parallel/RegistryCreds (0.45s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.307712ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-141726
addons_test.go:332: (dbg) Run:  kubectl --context addons-141726 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (257.593327ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 15:17:51.333708  280513 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:51.333816  280513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:51.333826  280513 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:51.333830  280513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:51.334157  280513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:17:51.334540  280513 mustload.go:66] Loading cluster: addons-141726
	I1202 15:17:51.335088  280513 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:51.335121  280513 addons.go:622] checking whether the cluster is paused
	I1202 15:17:51.335254  280513 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:51.335274  280513 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:17:51.335698  280513 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:17:51.354110  280513 ssh_runner.go:195] Run: systemctl --version
	I1202 15:17:51.354169  280513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:17:51.371559  280513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:17:51.472320  280513 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:17:51.472459  280513 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:17:51.504904  280513 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:17:51.504926  280513 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:17:51.504930  280513 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:17:51.504934  280513 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:17:51.504936  280513 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:17:51.504940  280513 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:17:51.504943  280513 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:17:51.504945  280513 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:17:51.504948  280513 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:17:51.504954  280513 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:17:51.504957  280513 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:17:51.504959  280513 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:17:51.504962  280513 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:17:51.504965  280513 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:17:51.504967  280513 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:17:51.504983  280513 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:17:51.504989  280513 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:17:51.504993  280513 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:17:51.504996  280513 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:17:51.504999  280513 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:17:51.505005  280513 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:17:51.505007  280513 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:17:51.505010  280513 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:17:51.505012  280513 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:17:51.505015  280513 cri.go:89] found id: ""
	I1202 15:17:51.505064  280513 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:17:51.519479  280513 out.go:203] 
	W1202 15:17:51.520602  280513 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:17:51.520626  280513 out.go:285] * 
	* 
	W1202 15:17:51.523798  280513 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:17:51.525289  280513 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.45s)

TestAddons/parallel/Ingress (148.4s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-141726 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-141726 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-141726 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [b7616e0f-62e8-4f8b-b996-3580561050dc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [b7616e0f-62e8-4f8b-b996-3580561050dc] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003627936s
I1202 15:17:57.411648  268099 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.709552751s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
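The exit status 28 reported by the ssh step above matches curl's operation-timed-out code: no response came back from the ingress before the command gave up after roughly 2m15s. As a rough equivalent of that request, here is a minimal Go sketch assuming it is executed on the minikube node itself (for example via minikube ssh); the address and Host header come from the log above, and the 30-second timeout is an illustrative assumption, not the test's value.

// ingress_check.go — a minimal sketch equivalent to the failing curl; not the test's code.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// Rough equivalent of: curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
	// run on the minikube node. The 30s timeout is illustrative.
	client := &http.Client{Timeout: 30 * time.Second}

	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Overriding Request.Host sets the Host header, so the ingress rule for
	// nginx.example.com is the one that should serve this request.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		// The report's curl timed out at this point (exit status 28).
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %s, %d bytes\n", resp.Status, len(body))
}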
addons_test.go:288: (dbg) Run:  kubectl --context addons-141726 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-141726
helpers_test.go:243: (dbg) docker inspect addons-141726:

-- stdout --
	[
	    {
	        "Id": "128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058",
	        "Created": "2025-12-02T15:16:04.050874973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270528,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:16:04.091838148Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058/hostname",
	        "HostsPath": "/var/lib/docker/containers/128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058/hosts",
	        "LogPath": "/var/lib/docker/containers/128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058/128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058-json.log",
	        "Name": "/addons-141726",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-141726:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-141726",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058",
	                "LowerDir": "/var/lib/docker/overlay2/58bbd9985dadadf3d6595010c73fc5198a3bfe6d0d3000d27fa89fa52c5738c5-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58bbd9985dadadf3d6595010c73fc5198a3bfe6d0d3000d27fa89fa52c5738c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58bbd9985dadadf3d6595010c73fc5198a3bfe6d0d3000d27fa89fa52c5738c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58bbd9985dadadf3d6595010c73fc5198a3bfe6d0d3000d27fa89fa52c5738c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-141726",
	                "Source": "/var/lib/docker/volumes/addons-141726/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-141726",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-141726",
	                "name.minikube.sigs.k8s.io": "addons-141726",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "79bbe823923829093b02d1fcb315c9f6d3c1fd95b694701f6715b7dd48ef5778",
	            "SandboxKey": "/var/run/docker/netns/79bbe8239238",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-141726": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e41e36b0a1e82b35432a130dea23fec5397aed2b06197e08a06740fba19835d3",
	                    "EndpointID": "7339b321bd5eb17037d4cb7c4aaf082ed99117a37312af2842c7b3d314c98b7d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "be:c8:fa:e1:8b:2b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-141726",
	                        "128a4d3a45ef"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-141726 -n addons-141726
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-141726 logs -n 25: (1.178393942s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-703402 --alsologtostderr --binary-mirror http://127.0.0.1:46397 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-703402 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ delete  │ -p binary-mirror-703402                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-703402 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ addons  │ disable dashboard -p addons-141726                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ addons  │ enable dashboard -p addons-141726                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ start   │ -p addons-141726 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:17 UTC │
	│ addons  │ addons-141726 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ addons-141726 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ enable headlamp -p addons-141726 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ addons-141726 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ addons-141726 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ addons-141726 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ addons-141726 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ addons-141726 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ ssh     │ addons-141726 ssh cat /opt/local-path-provisioner/pvc-3465dce8-839e-40a0-b246-a6443acf23da_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ addons  │ addons-141726 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ ip      │ addons-141726 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ addons  │ addons-141726 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ addons-141726 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-141726                                                                                                                                                                                                                                                                                                                                                                                           │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ addons  │ addons-141726 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ addons-141726 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ ssh     │ addons-141726 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ addons-141726 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:18 UTC │                     │
	│ addons  │ addons-141726 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:18 UTC │                     │
	│ ip      │ addons-141726 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-141726        │ jenkins │ v1.37.0 │ 02 Dec 25 15:20 UTC │ 02 Dec 25 15:20 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:15:42
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:15:42.428066  269889 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:15:42.428361  269889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:15:42.428373  269889 out.go:374] Setting ErrFile to fd 2...
	I1202 15:15:42.428378  269889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:15:42.428620  269889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:15:42.429212  269889 out.go:368] Setting JSON to false
	I1202 15:15:42.430229  269889 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7083,"bootTime":1764681459,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:15:42.430301  269889 start.go:143] virtualization: kvm guest
	I1202 15:15:42.432355  269889 out.go:179] * [addons-141726] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:15:42.433715  269889 notify.go:221] Checking for updates...
	I1202 15:15:42.433727  269889 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:15:42.435025  269889 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:15:42.436436  269889 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:15:42.437579  269889 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 15:15:42.438837  269889 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:15:42.440376  269889 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:15:42.441905  269889 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:15:42.465312  269889 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:15:42.465521  269889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:15:42.526757  269889 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-12-02 15:15:42.516398623 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:15:42.526871  269889 docker.go:319] overlay module found
	I1202 15:15:42.528672  269889 out.go:179] * Using the docker driver based on user configuration
	I1202 15:15:42.529946  269889 start.go:309] selected driver: docker
	I1202 15:15:42.529968  269889 start.go:927] validating driver "docker" against <nil>
	I1202 15:15:42.529987  269889 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:15:42.530509  269889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:15:42.593812  269889 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-12-02 15:15:42.583386999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:15:42.594040  269889 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 15:15:42.594276  269889 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 15:15:42.595892  269889 out.go:179] * Using Docker driver with root privileges
	I1202 15:15:42.596777  269889 cni.go:84] Creating CNI manager for ""
	I1202 15:15:42.596844  269889 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 15:15:42.596857  269889 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 15:15:42.596931  269889 start.go:353] cluster config:
	{Name:addons-141726 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-141726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:15:42.598015  269889 out.go:179] * Starting "addons-141726" primary control-plane node in "addons-141726" cluster
	I1202 15:15:42.598965  269889 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 15:15:42.600084  269889 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 15:15:42.601105  269889 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 15:15:42.601148  269889 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 15:15:42.601144  269889 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 15:15:42.601179  269889 cache.go:65] Caching tarball of preloaded images
	I1202 15:15:42.601459  269889 preload.go:238] Found /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 15:15:42.601476  269889 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 15:15:42.601820  269889 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/config.json ...
	I1202 15:15:42.601853  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/config.json: {Name:mk2f435c1f3622184bd17cd188725050f114eedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:15:42.620229  269889 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 15:15:42.620356  269889 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 15:15:42.620372  269889 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1202 15:15:42.620377  269889 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1202 15:15:42.620387  269889 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1202 15:15:42.620392  269889 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1202 15:15:56.007134  269889 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1202 15:15:56.007191  269889 cache.go:243] Successfully downloaded all kic artifacts
	I1202 15:15:56.007244  269889 start.go:360] acquireMachinesLock for addons-141726: {Name:mk4ed9ed1d49aa4c0786fb49dc3ee4a34ea8161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 15:15:56.007362  269889 start.go:364] duration metric: took 91.547µs to acquireMachinesLock for "addons-141726"
	I1202 15:15:56.007395  269889 start.go:93] Provisioning new machine with config: &{Name:addons-141726 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-141726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 15:15:56.007509  269889 start.go:125] createHost starting for "" (driver="docker")
	I1202 15:15:56.009323  269889 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1202 15:15:56.009577  269889 start.go:159] libmachine.API.Create for "addons-141726" (driver="docker")
	I1202 15:15:56.009609  269889 client.go:173] LocalClient.Create starting
	I1202 15:15:56.009832  269889 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem
	I1202 15:15:56.059961  269889 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem
	I1202 15:15:56.113136  269889 cli_runner.go:164] Run: docker network inspect addons-141726 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 15:15:56.130624  269889 cli_runner.go:211] docker network inspect addons-141726 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 15:15:56.130706  269889 network_create.go:284] running [docker network inspect addons-141726] to gather additional debugging logs...
	I1202 15:15:56.130725  269889 cli_runner.go:164] Run: docker network inspect addons-141726
	W1202 15:15:56.147310  269889 cli_runner.go:211] docker network inspect addons-141726 returned with exit code 1
	I1202 15:15:56.147339  269889 network_create.go:287] error running [docker network inspect addons-141726]: docker network inspect addons-141726: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-141726 not found
	I1202 15:15:56.147354  269889 network_create.go:289] output of [docker network inspect addons-141726]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-141726 not found
	
	** /stderr **
	I1202 15:15:56.147479  269889 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 15:15:56.164293  269889 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014a9700}
	I1202 15:15:56.164346  269889 network_create.go:124] attempt to create docker network addons-141726 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 15:15:56.164393  269889 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-141726 addons-141726
	I1202 15:15:56.213302  269889 network_create.go:108] docker network addons-141726 192.168.49.0/24 created
	I1202 15:15:56.213341  269889 kic.go:121] calculated static IP "192.168.49.2" for the "addons-141726" container
	I1202 15:15:56.213413  269889 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 15:15:56.229680  269889 cli_runner.go:164] Run: docker volume create addons-141726 --label name.minikube.sigs.k8s.io=addons-141726 --label created_by.minikube.sigs.k8s.io=true
	I1202 15:15:56.247956  269889 oci.go:103] Successfully created a docker volume addons-141726
	I1202 15:15:56.248062  269889 cli_runner.go:164] Run: docker run --rm --name addons-141726-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-141726 --entrypoint /usr/bin/test -v addons-141726:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 15:16:00.197128  269889 cli_runner.go:217] Completed: docker run --rm --name addons-141726-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-141726 --entrypoint /usr/bin/test -v addons-141726:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (3.949013122s)
	I1202 15:16:00.197164  269889 oci.go:107] Successfully prepared a docker volume addons-141726
	I1202 15:16:00.197202  269889 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 15:16:00.197215  269889 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 15:16:00.197270  269889 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-141726:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 15:16:03.974465  269889 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-141726:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.777130036s)
	I1202 15:16:03.974496  269889 kic.go:203] duration metric: took 3.777278351s to extract preloaded images to volume ...
	W1202 15:16:03.974594  269889 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 15:16:03.974635  269889 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 15:16:03.974709  269889 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 15:16:04.034948  269889 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-141726 --name addons-141726 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-141726 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-141726 --network addons-141726 --ip 192.168.49.2 --volume addons-141726:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 15:16:04.312594  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Running}}
	I1202 15:16:04.330854  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:04.349814  269889 cli_runner.go:164] Run: docker exec addons-141726 stat /var/lib/dpkg/alternatives/iptables
	I1202 15:16:04.404810  269889 oci.go:144] the created container "addons-141726" has a running status.
	I1202 15:16:04.404842  269889 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa...
	I1202 15:16:04.486749  269889 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 15:16:04.510326  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:04.529996  269889 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 15:16:04.530022  269889 kic_runner.go:114] Args: [docker exec --privileged addons-141726 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 15:16:04.589083  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:04.614601  269889 machine.go:94] provisionDockerMachine start ...
	I1202 15:16:04.614710  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:04.639186  269889 main.go:143] libmachine: Using SSH client type: native
	I1202 15:16:04.639921  269889 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1202 15:16:04.639946  269889 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 15:16:04.640717  269889 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45500->127.0.0.1:32888: read: connection reset by peer
	I1202 15:16:07.780766  269889 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-141726
	
	I1202 15:16:07.780795  269889 ubuntu.go:182] provisioning hostname "addons-141726"
	I1202 15:16:07.780862  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:07.799055  269889 main.go:143] libmachine: Using SSH client type: native
	I1202 15:16:07.799306  269889 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1202 15:16:07.799322  269889 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-141726 && echo "addons-141726" | sudo tee /etc/hostname
	I1202 15:16:07.949586  269889 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-141726
	
	I1202 15:16:07.949678  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:07.970881  269889 main.go:143] libmachine: Using SSH client type: native
	I1202 15:16:07.971087  269889 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1202 15:16:07.971102  269889 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-141726' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-141726/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-141726' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 15:16:08.112359  269889 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 15:16:08.112394  269889 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 15:16:08.112461  269889 ubuntu.go:190] setting up certificates
	I1202 15:16:08.112476  269889 provision.go:84] configureAuth start
	I1202 15:16:08.112537  269889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-141726
	I1202 15:16:08.130395  269889 provision.go:143] copyHostCerts
	I1202 15:16:08.130501  269889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 15:16:08.130639  269889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 15:16:08.130699  269889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 15:16:08.130752  269889 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.addons-141726 san=[127.0.0.1 192.168.49.2 addons-141726 localhost minikube]
	I1202 15:16:08.211091  269889 provision.go:177] copyRemoteCerts
	I1202 15:16:08.211154  269889 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 15:16:08.211186  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:08.229442  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:08.329792  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 15:16:08.349287  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 15:16:08.366869  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 15:16:08.384943  269889 provision.go:87] duration metric: took 272.449321ms to configureAuth
	I1202 15:16:08.384989  269889 ubuntu.go:206] setting minikube options for container-runtime
	I1202 15:16:08.385178  269889 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:16:08.385297  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:08.403552  269889 main.go:143] libmachine: Using SSH client type: native
	I1202 15:16:08.403764  269889 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1202 15:16:08.403779  269889 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 15:16:08.688245  269889 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 15:16:08.688267  269889 machine.go:97] duration metric: took 4.073634043s to provisionDockerMachine
	I1202 15:16:08.688279  269889 client.go:176] duration metric: took 12.678663098s to LocalClient.Create
	I1202 15:16:08.688302  269889 start.go:167] duration metric: took 12.6787275s to libmachine.API.Create "addons-141726"
	I1202 15:16:08.688312  269889 start.go:293] postStartSetup for "addons-141726" (driver="docker")
	I1202 15:16:08.688324  269889 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 15:16:08.688380  269889 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 15:16:08.688466  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:08.705418  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:08.806557  269889 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 15:16:08.810168  269889 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 15:16:08.810202  269889 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 15:16:08.810215  269889 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 15:16:08.810277  269889 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 15:16:08.810305  269889 start.go:296] duration metric: took 121.985443ms for postStartSetup
	I1202 15:16:08.810594  269889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-141726
	I1202 15:16:08.828030  269889 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/config.json ...
	I1202 15:16:08.828330  269889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:16:08.828383  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:08.845605  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:08.941665  269889 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 15:16:08.946334  269889 start.go:128] duration metric: took 12.938805567s to createHost
	I1202 15:16:08.946363  269889 start.go:83] releasing machines lock for "addons-141726", held for 12.938985615s
	I1202 15:16:08.946447  269889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-141726
	I1202 15:16:08.963736  269889 ssh_runner.go:195] Run: cat /version.json
	I1202 15:16:08.963795  269889 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 15:16:08.963818  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:08.963875  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:08.981958  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:08.982304  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:09.133783  269889 ssh_runner.go:195] Run: systemctl --version
	I1202 15:16:09.140160  269889 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 15:16:09.173773  269889 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 15:16:09.178307  269889 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 15:16:09.178381  269889 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 15:16:09.203987  269889 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 15:16:09.204018  269889 start.go:496] detecting cgroup driver to use...
	I1202 15:16:09.204060  269889 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 15:16:09.204113  269889 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 15:16:09.219342  269889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 15:16:09.231095  269889 docker.go:218] disabling cri-docker service (if available) ...
	I1202 15:16:09.231171  269889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 15:16:09.247618  269889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 15:16:09.264259  269889 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 15:16:09.342378  269889 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 15:16:09.425692  269889 docker.go:234] disabling docker service ...
	I1202 15:16:09.425769  269889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 15:16:09.444094  269889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 15:16:09.456339  269889 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 15:16:09.534038  269889 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 15:16:09.615344  269889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 15:16:09.627704  269889 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 15:16:09.641748  269889 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 15:16:09.641813  269889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.651822  269889 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 15:16:09.651904  269889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.660790  269889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.669710  269889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.678305  269889 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 15:16:09.686257  269889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.694707  269889 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.707857  269889 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.716595  269889 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 15:16:09.723878  269889 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 15:16:09.731123  269889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 15:16:09.809831  269889 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 15:16:09.942488  269889 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 15:16:09.942578  269889 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 15:16:09.946778  269889 start.go:564] Will wait 60s for crictl version
	I1202 15:16:09.946831  269889 ssh_runner.go:195] Run: which crictl
	I1202 15:16:09.950567  269889 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 15:16:09.975040  269889 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 15:16:09.975141  269889 ssh_runner.go:195] Run: crio --version
	I1202 15:16:10.003092  269889 ssh_runner.go:195] Run: crio --version
	I1202 15:16:10.031881  269889 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 15:16:10.033172  269889 cli_runner.go:164] Run: docker network inspect addons-141726 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 15:16:10.051017  269889 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 15:16:10.055129  269889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 15:16:10.065608  269889 kubeadm.go:884] updating cluster {Name:addons-141726 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-141726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 15:16:10.065720  269889 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 15:16:10.065771  269889 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 15:16:10.096631  269889 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 15:16:10.096652  269889 crio.go:433] Images already preloaded, skipping extraction
	I1202 15:16:10.096700  269889 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 15:16:10.121994  269889 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 15:16:10.122014  269889 cache_images.go:86] Images are preloaded, skipping loading
	I1202 15:16:10.122022  269889 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 15:16:10.122136  269889 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-141726 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-141726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 15:16:10.122213  269889 ssh_runner.go:195] Run: crio config
	I1202 15:16:10.166938  269889 cni.go:84] Creating CNI manager for ""
	I1202 15:16:10.166959  269889 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 15:16:10.166984  269889 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 15:16:10.167014  269889 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-141726 NodeName:addons-141726 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 15:16:10.167165  269889 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-141726"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 15:16:10.167245  269889 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 15:16:10.176528  269889 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 15:16:10.176597  269889 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 15:16:10.185481  269889 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 15:16:10.198193  269889 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 15:16:10.213766  269889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1202 15:16:10.226652  269889 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 15:16:10.230447  269889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 15:16:10.240927  269889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 15:16:10.327552  269889 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 15:16:10.350931  269889 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726 for IP: 192.168.49.2
	I1202 15:16:10.350961  269889 certs.go:195] generating shared ca certs ...
	I1202 15:16:10.350983  269889 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.351126  269889 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 15:16:10.434776  269889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt ...
	I1202 15:16:10.434809  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt: {Name:mk7e072649a4b4c569a833f8cebcc046fa9ba225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.434995  269889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key ...
	I1202 15:16:10.435007  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key: {Name:mkd33308f48f06be4f494f9449310e44e1344a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.435093  269889 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 15:16:10.521841  269889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt ...
	I1202 15:16:10.521874  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt: {Name:mk8e36d0ab1ab4663173c4b721b0d09b33ed1a71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.522045  269889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key ...
	I1202 15:16:10.522056  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key: {Name:mkc7e1042aa3770527969456bd36137ed55e29d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.522132  269889 certs.go:257] generating profile certs ...
	I1202 15:16:10.522193  269889 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.key
	I1202 15:16:10.522212  269889 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt with IP's: []
	I1202 15:16:10.640012  269889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt ...
	I1202 15:16:10.640043  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: {Name:mk59530b14c997590b1fec6c9d583f6576bd969a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.640211  269889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.key ...
	I1202 15:16:10.640222  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.key: {Name:mk4ef8aa8a6edc1eef7da9e6cf38f0ff677d947e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.640294  269889 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.key.875445ec
	I1202 15:16:10.640314  269889 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.crt.875445ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 15:16:10.784072  269889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.crt.875445ec ...
	I1202 15:16:10.784101  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.crt.875445ec: {Name:mk5441da37da4bbc8e91e551854ac1e8a407c404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.784323  269889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.key.875445ec ...
	I1202 15:16:10.784347  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.key.875445ec: {Name:mk0bf01907c5401f83f3a079e735493d65e19e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.784476  269889 certs.go:382] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.crt.875445ec -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.crt
	I1202 15:16:10.784579  269889 certs.go:386] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.key.875445ec -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.key
	I1202 15:16:10.784652  269889 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.key
	I1202 15:16:10.784673  269889 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.crt with IP's: []
	I1202 15:16:10.885306  269889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.crt ...
	I1202 15:16:10.885338  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.crt: {Name:mkd8d30684698f4678aeb27ef0d90c15b8ca24ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.885574  269889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.key ...
	I1202 15:16:10.885595  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.key: {Name:mkfe613656e416c1b4f650e11394388c60c12cb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.885841  269889 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 15:16:10.885893  269889 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 15:16:10.885936  269889 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 15:16:10.885969  269889 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 15:16:10.886631  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 15:16:10.904702  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 15:16:10.922395  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 15:16:10.940727  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 15:16:10.960205  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 15:16:10.978600  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 15:16:10.995821  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 15:16:11.012659  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 15:16:11.029955  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 15:16:11.051449  269889 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 15:16:11.063882  269889 ssh_runner.go:195] Run: openssl version
	I1202 15:16:11.070008  269889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 15:16:11.080952  269889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 15:16:11.084618  269889 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 15:16:11.084681  269889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 15:16:11.118315  269889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
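The b5213941.0 name used above is the OpenSSL subject-hash form under which the minikube CA is linked into /etc/ssl/certs, computed by the preceding openssl x509 -hash call. A rough equivalent of the same two steps, assuming a PEM certificate already copied to /usr/share/ca-certificates/minikubeCA.pem:

    # derive the subject hash and link the CA into the system trust store under that name
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"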
	I1202 15:16:11.127021  269889 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 15:16:11.130652  269889 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 15:16:11.130715  269889 kubeadm.go:401] StartCluster: {Name:addons-141726 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-141726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:16:11.130794  269889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:16:11.130926  269889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:16:11.157006  269889 cri.go:89] found id: ""
	I1202 15:16:11.157071  269889 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 15:16:11.164986  269889 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 15:16:11.172945  269889 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 15:16:11.173040  269889 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 15:16:11.180871  269889 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 15:16:11.180902  269889 kubeadm.go:158] found existing configuration files:
	
	I1202 15:16:11.180944  269889 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 15:16:11.188664  269889 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 15:16:11.188712  269889 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 15:16:11.196461  269889 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 15:16:11.204368  269889 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 15:16:11.204471  269889 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 15:16:11.211941  269889 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 15:16:11.219342  269889 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 15:16:11.219410  269889 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 15:16:11.226584  269889 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 15:16:11.234248  269889 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 15:16:11.234307  269889 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 15:16:11.241835  269889 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 15:16:11.286140  269889 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 15:16:11.286223  269889 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 15:16:11.307924  269889 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 15:16:11.307997  269889 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 15:16:11.308071  269889 kubeadm.go:319] OS: Linux
	I1202 15:16:11.308137  269889 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 15:16:11.308197  269889 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 15:16:11.308248  269889 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 15:16:11.308327  269889 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 15:16:11.308390  269889 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 15:16:11.308476  269889 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 15:16:11.308563  269889 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 15:16:11.308627  269889 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 15:16:11.363016  269889 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 15:16:11.363157  269889 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 15:16:11.363319  269889 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 15:16:11.371299  269889 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 15:16:11.373444  269889 out.go:252]   - Generating certificates and keys ...
	I1202 15:16:11.373521  269889 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 15:16:11.373643  269889 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 15:16:11.669716  269889 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 15:16:11.748093  269889 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 15:16:11.958284  269889 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 15:16:12.137902  269889 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 15:16:12.322477  269889 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 15:16:12.322595  269889 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-141726 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 15:16:12.541408  269889 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 15:16:12.541588  269889 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-141726 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 15:16:12.878840  269889 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 15:16:13.043898  269889 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 15:16:13.276375  269889 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 15:16:13.276480  269889 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 15:16:13.428903  269889 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 15:16:13.520532  269889 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 15:16:13.637332  269889 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 15:16:13.858478  269889 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 15:16:14.150603  269889 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 15:16:14.150979  269889 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 15:16:14.155559  269889 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 15:16:14.157329  269889 out.go:252]   - Booting up control plane ...
	I1202 15:16:14.157415  269889 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 15:16:14.157506  269889 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 15:16:14.158097  269889 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 15:16:14.186867  269889 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 15:16:14.187021  269889 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 15:16:14.194003  269889 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 15:16:14.194203  269889 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 15:16:14.194257  269889 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 15:16:14.292102  269889 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 15:16:14.292230  269889 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 15:16:15.293712  269889 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001627687s
	I1202 15:16:15.297823  269889 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 15:16:15.297981  269889 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1202 15:16:15.298284  269889 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 15:16:15.298444  269889 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 15:16:16.393313  269889 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.095371752s
	I1202 15:16:17.126996  269889 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.829195864s
	I1202 15:16:18.799530  269889 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501617286s
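The control-plane checks above poll the kubelet healthz endpoint plus the three component endpoints and ports shown in the log. A sketch of probing the same endpoints manually from inside the node, assuming these paths accept unauthenticated health probes (the upstream default for /healthz and /livez) and using -k because the serving certs are cluster-internal:

    curl -sf  http://127.0.0.1:10248/healthz   && echo "kubelet ok"
    curl -skf https://127.0.0.1:10257/healthz  && echo "kube-controller-manager ok"
    curl -skf https://127.0.0.1:10259/livez    && echo "kube-scheduler ok"
    curl -skf https://192.168.49.2:8443/livez  && echo "kube-apiserver ok"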
	I1202 15:16:18.815844  269889 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 15:16:18.825209  269889 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 15:16:18.833668  269889 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 15:16:18.833966  269889 kubeadm.go:319] [mark-control-plane] Marking the node addons-141726 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 15:16:18.841085  269889 kubeadm.go:319] [bootstrap-token] Using token: 194opl.hhk7qv810vcwb7dj
	I1202 15:16:18.842488  269889 out.go:252]   - Configuring RBAC rules ...
	I1202 15:16:18.842652  269889 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 15:16:18.846110  269889 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 15:16:18.853024  269889 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 15:16:18.855488  269889 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 15:16:18.857997  269889 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 15:16:18.860085  269889 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 15:16:19.204981  269889 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 15:16:19.620725  269889 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 15:16:20.205212  269889 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 15:16:20.206173  269889 kubeadm.go:319] 
	I1202 15:16:20.206288  269889 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 15:16:20.206322  269889 kubeadm.go:319] 
	I1202 15:16:20.206458  269889 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 15:16:20.206468  269889 kubeadm.go:319] 
	I1202 15:16:20.206501  269889 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 15:16:20.206588  269889 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 15:16:20.206673  269889 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 15:16:20.206689  269889 kubeadm.go:319] 
	I1202 15:16:20.206768  269889 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 15:16:20.206782  269889 kubeadm.go:319] 
	I1202 15:16:20.206959  269889 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 15:16:20.206980  269889 kubeadm.go:319] 
	I1202 15:16:20.207047  269889 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 15:16:20.207121  269889 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 15:16:20.207177  269889 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 15:16:20.207189  269889 kubeadm.go:319] 
	I1202 15:16:20.207310  269889 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 15:16:20.207447  269889 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 15:16:20.207458  269889 kubeadm.go:319] 
	I1202 15:16:20.207593  269889 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 194opl.hhk7qv810vcwb7dj \
	I1202 15:16:20.207759  269889 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a700026e2fe1634919809d9050f2aa4b3e0ccbee543d4881e1cd695d56e7eef6 \
	I1202 15:16:20.207797  269889 kubeadm.go:319] 	--control-plane 
	I1202 15:16:20.207807  269889 kubeadm.go:319] 
	I1202 15:16:20.207905  269889 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 15:16:20.207921  269889 kubeadm.go:319] 
	I1202 15:16:20.207990  269889 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 194opl.hhk7qv810vcwb7dj \
	I1202 15:16:20.208087  269889 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a700026e2fe1634919809d9050f2aa4b3e0ccbee543d4881e1cd695d56e7eef6 
	I1202 15:16:20.209957  269889 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 15:16:20.210078  269889 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 15:16:20.210109  269889 cni.go:84] Creating CNI manager for ""
	I1202 15:16:20.210119  269889 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 15:16:20.212727  269889 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 15:16:20.214134  269889 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 15:16:20.218443  269889 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 15:16:20.218464  269889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 15:16:20.231417  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
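kindnet is selected above because the docker driver is paired with the crio runtime, and its manifest is applied with the same pinned kubectl binary used throughout the log. A small sketch, assuming the kindnet DaemonSet labels its pods with app=kindnet, of checking that the CNI pods come up afterwards:

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet -o wide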
	I1202 15:16:20.436290  269889 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 15:16:20.436374  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:20.436397  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-141726 minikube.k8s.io/updated_at=2025_12_02T15_16_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689 minikube.k8s.io/name=addons-141726 minikube.k8s.io/primary=true
	I1202 15:16:20.448517  269889 ops.go:34] apiserver oom_adj: -16
	I1202 15:16:20.514034  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:21.014028  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:21.514656  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:22.014917  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:22.514351  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:23.014818  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:23.514831  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:24.014961  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:24.514982  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:24.577263  269889 kubeadm.go:1114] duration metric: took 4.14095823s to wait for elevateKubeSystemPrivileges
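The repeated "kubectl get sa default" calls above are a readiness poll: the loop retries roughly every half second until the default ServiceAccount exists, which signals that the controller-manager has started populating namespaces and the minikube-rbac ClusterRoleBinding created earlier can take effect. A bash sketch of the same wait, assuming the binary and kubeconfig paths from the log:

    # block until the default ServiceAccount is visible through the API server
    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done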
	I1202 15:16:24.577307  269889 kubeadm.go:403] duration metric: took 13.446595269s to StartCluster
	I1202 15:16:24.577338  269889 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:24.577507  269889 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:16:24.577944  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:24.578121  269889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 15:16:24.578154  269889 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 15:16:24.578210  269889 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1202 15:16:24.578342  269889 addons.go:70] Setting yakd=true in profile "addons-141726"
	I1202 15:16:24.578362  269889 addons.go:239] Setting addon yakd=true in "addons-141726"
	I1202 15:16:24.578359  269889 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:16:24.578370  269889 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-141726"
	I1202 15:16:24.578384  269889 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-141726"
	I1202 15:16:24.578389  269889 addons.go:70] Setting registry-creds=true in profile "addons-141726"
	I1202 15:16:24.578411  269889 addons.go:70] Setting default-storageclass=true in profile "addons-141726"
	I1202 15:16:24.578407  269889 addons.go:70] Setting volcano=true in profile "addons-141726"
	I1202 15:16:24.578434  269889 addons.go:70] Setting volumesnapshots=true in profile "addons-141726"
	I1202 15:16:24.578437  269889 addons.go:70] Setting gcp-auth=true in profile "addons-141726"
	I1202 15:16:24.578440  269889 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-141726"
	I1202 15:16:24.578407  269889 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-141726"
	I1202 15:16:24.578446  269889 addons.go:239] Setting addon volcano=true in "addons-141726"
	I1202 15:16:24.578450  269889 addons.go:239] Setting addon volumesnapshots=true in "addons-141726"
	I1202 15:16:24.578451  269889 addons.go:70] Setting metrics-server=true in profile "addons-141726"
	I1202 15:16:24.578456  269889 mustload.go:66] Loading cluster: addons-141726
	I1202 15:16:24.578460  269889 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-141726"
	I1202 15:16:24.578465  269889 addons.go:239] Setting addon metrics-server=true in "addons-141726"
	I1202 15:16:24.578467  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.578497  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.578507  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.578609  269889 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:16:24.578798  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578841  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578857  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578958  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578968  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578982  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.579061  269889 addons.go:70] Setting ingress=true in profile "addons-141726"
	I1202 15:16:24.579084  269889 addons.go:239] Setting addon ingress=true in "addons-141726"
	I1202 15:16:24.579124  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.579221  269889 addons.go:70] Setting ingress-dns=true in profile "addons-141726"
	I1202 15:16:24.579253  269889 addons.go:239] Setting addon ingress-dns=true in "addons-141726"
	I1202 15:16:24.579286  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.579541  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578357  269889 addons.go:70] Setting inspektor-gadget=true in profile "addons-141726"
	I1202 15:16:24.579687  269889 addons.go:239] Setting addon inspektor-gadget=true in "addons-141726"
	I1202 15:16:24.579713  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.579766  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578409  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.579993  269889 addons.go:70] Setting storage-provisioner=true in profile "addons-141726"
	I1202 15:16:24.580016  269889 addons.go:239] Setting addon storage-provisioner=true in "addons-141726"
	I1202 15:16:24.580046  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.578437  269889 addons.go:239] Setting addon registry-creds=true in "addons-141726"
	I1202 15:16:24.580086  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.580292  269889 addons.go:70] Setting cloud-spanner=true in profile "addons-141726"
	I1202 15:16:24.580312  269889 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-141726"
	I1202 15:16:24.580318  269889 addons.go:239] Setting addon cloud-spanner=true in "addons-141726"
	I1202 15:16:24.580343  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.580356  269889 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-141726"
	I1202 15:16:24.580381  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.580489  269889 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-141726"
	I1202 15:16:24.580540  269889 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-141726"
	I1202 15:16:24.580572  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.580593  269889 addons.go:70] Setting registry=true in profile "addons-141726"
	I1202 15:16:24.580623  269889 addons.go:239] Setting addon registry=true in "addons-141726"
	I1202 15:16:24.580650  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.578407  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.581211  269889 out.go:179] * Verifying Kubernetes components...
	I1202 15:16:24.582572  269889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 15:16:24.588090  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.588632  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.588675  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.589131  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.589235  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.589797  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.590557  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.592648  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.614077  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.626534  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.637448  269889 addons.go:239] Setting addon default-storageclass=true in "addons-141726"
	I1202 15:16:24.637513  269889 host.go:66] Checking if "addons-141726" exists ...
	W1202 15:16:24.638214  269889 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1202 15:16:24.644403  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.652115  269889 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-141726"
	I1202 15:16:24.652179  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.652706  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.666051  269889 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 15:16:24.667718  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 15:16:24.668984  269889 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 15:16:24.669009  269889 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 15:16:24.669079  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.669530  269889 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1202 15:16:24.670225  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1202 15:16:24.670358  269889 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 15:16:24.670371  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1202 15:16:24.670508  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.671435  269889 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1202 15:16:24.672614  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 15:16:24.672773  269889 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1202 15:16:24.672839  269889 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 15:16:24.672853  269889 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 15:16:24.672944  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.673798  269889 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 15:16:24.673814  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1202 15:16:24.673859  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.674833  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 15:16:24.675995  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 15:16:24.677173  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 15:16:24.678232  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 15:16:24.682096  269889 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1202 15:16:24.682156  269889 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 15:16:24.683582  269889 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 15:16:24.683603  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 15:16:24.683680  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.683853  269889 out.go:179]   - Using image docker.io/registry:3.0.0
	I1202 15:16:24.684097  269889 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1202 15:16:24.684126  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 15:16:24.685174  269889 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 15:16:24.685195  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 15:16:24.685255  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.685469  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 15:16:24.685931  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 15:16:24.685945  269889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 15:16:24.686004  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.686784  269889 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 15:16:24.686918  269889 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 15:16:24.686952  269889 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 15:16:24.687032  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.689111  269889 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 15:16:24.690730  269889 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 15:16:24.690750  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 15:16:24.690805  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.704774  269889 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1202 15:16:24.712401  269889 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 15:16:24.715529  269889 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1202 15:16:24.715554  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 15:16:24.715642  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.716448  269889 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 15:16:24.716471  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 15:16:24.716601  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.720032  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.722962  269889 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1202 15:16:24.723839  269889 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 15:16:24.723873  269889 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 15:16:24.723936  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.725140  269889 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 15:16:24.725163  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1202 15:16:24.725223  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.730820  269889 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1202 15:16:24.732199  269889 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 15:16:24.732223  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 15:16:24.732295  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.735667  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.736780  269889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
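The sed pipeline above splices a hosts block for host.minikube.internal (pointing at 192.168.49.1) into the CoreDNS Corefile and replaces the ConfigMap in place; the confirmation appears further down as the "host record injected into CoreDNS's ConfigMap" line. A quick way to inspect the result, assuming the same kubeconfig and binary path:

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'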
	I1202 15:16:24.751814  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.759573  269889 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 15:16:24.760866  269889 out.go:179]   - Using image docker.io/busybox:stable
	I1202 15:16:24.762465  269889 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 15:16:24.762485  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 15:16:24.762643  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.763526  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.772776  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.775906  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.776201  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.778396  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.781646  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.782453  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.784759  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.786977  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.787140  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.789595  269889 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1202 15:16:24.792526  269889 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 15:16:24.793663  269889 retry.go:31] will retry after 127.434668ms: ssh: handshake failed: EOF
	I1202 15:16:24.805137  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	W1202 15:16:24.809032  269889 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 15:16:24.809073  269889 retry.go:31] will retry after 184.320088ms: ssh: handshake failed: EOF
	I1202 15:16:24.809375  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.903804  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 15:16:24.919505  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 15:16:24.928234  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 15:16:24.928260  269889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 15:16:24.928472  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 15:16:24.956360  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 15:16:24.957613  269889 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 15:16:24.957636  269889 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 15:16:24.961344  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 15:16:24.961450  269889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 15:16:24.962989  269889 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 15:16:24.963011  269889 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 15:16:24.963526  269889 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 15:16:24.963541  269889 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 15:16:24.981465  269889 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 15:16:24.981490  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 15:16:24.986046  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 15:16:24.987012  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 15:16:24.989795  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 15:16:24.992160  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 15:16:24.993599  269889 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 15:16:24.993618  269889 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 15:16:25.017568  269889 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 15:16:25.017748  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 15:16:25.023369  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 15:16:25.023466  269889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 15:16:25.027931  269889 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 15:16:25.027959  269889 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 15:16:25.031229  269889 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 15:16:25.031255  269889 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 15:16:25.031635  269889 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 15:16:25.031658  269889 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 15:16:25.069168  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 15:16:25.073804  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 15:16:25.073835  269889 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 15:16:25.081865  269889 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 15:16:25.081894  269889 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 15:16:25.085535  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 15:16:25.085563  269889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 15:16:25.097230  269889 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 15:16:25.097267  269889 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 15:16:25.125189  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 15:16:25.143020  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 15:16:25.143050  269889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 15:16:25.144755  269889 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1202 15:16:25.145725  269889 node_ready.go:35] waiting up to 6m0s for node "addons-141726" to be "Ready" ...
	I1202 15:16:25.146574  269889 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 15:16:25.146592  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 15:16:25.155968  269889 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 15:16:25.155993  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 15:16:25.175571  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 15:16:25.190020  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 15:16:25.192517  269889 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 15:16:25.192542  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 15:16:25.197291  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 15:16:25.199278  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 15:16:25.269075  269889 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 15:16:25.269125  269889 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 15:16:25.337656  269889 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 15:16:25.337679  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 15:16:25.399358  269889 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 15:16:25.399381  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 15:16:25.498968  269889 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 15:16:25.499015  269889 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1202 15:16:25.555502  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 15:16:25.669711  269889 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-141726" context rescaled to 1 replicas
	I1202 15:16:26.213877  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.310031659s)
	I1202 15:16:26.213917  269889 addons.go:495] Verifying addon ingress=true in "addons-141726"
	I1202 15:16:26.213964  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.294424411s)
	I1202 15:16:26.214079  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.285584172s)
	I1202 15:16:26.214137  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.257749343s)
	I1202 15:16:26.214213  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.228132327s)
	I1202 15:16:26.214480  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227442853s)
	I1202 15:16:26.214511  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.222323209s)
	I1202 15:16:26.214540  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.224720282s)
	I1202 15:16:26.214588  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.145392858s)
	I1202 15:16:26.214600  269889 addons.go:495] Verifying addon registry=true in "addons-141726"
	I1202 15:16:26.214648  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.089429282s)
	I1202 15:16:26.214734  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.039081855s)
	I1202 15:16:26.214751  269889 addons.go:495] Verifying addon metrics-server=true in "addons-141726"
	I1202 15:16:26.214831  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.024775746s)
	I1202 15:16:26.214917  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.017596623s)
	I1202 15:16:26.215726  269889 out.go:179] * Verifying ingress addon...
	I1202 15:16:26.215746  269889 out.go:179] * Verifying registry addon...
	I1202 15:16:26.216695  269889 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-141726 service yakd-dashboard -n yakd-dashboard
	
	I1202 15:16:26.217861  269889 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 15:16:26.218548  269889 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 15:16:26.221282  269889 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 15:16:26.221302  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:26.221472  269889 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 15:16:26.221488  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1202 15:16:26.224879  269889 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1202 15:16:26.721204  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:26.725929  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.526605486s)
	W1202 15:16:26.725984  269889 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 15:16:26.726013  269889 retry.go:31] will retry after 187.482016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 15:16:26.726150  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.170603124s)
	I1202 15:16:26.726186  269889 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-141726"
	I1202 15:16:26.728278  269889 out.go:179] * Verifying csi-hostpath-driver addon...
	I1202 15:16:26.729519  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:26.730329  269889 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 15:16:26.733586  269889 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 15:16:26.733612  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:26.914385  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1202 15:16:27.149166  269889 node_ready.go:57] node "addons-141726" has "Ready":"False" status (will retry)
	I1202 15:16:27.221536  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:27.221721  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:27.233106  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:27.721501  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:27.721649  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:27.733000  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:28.222154  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:28.222206  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:28.233829  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:28.721223  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:28.721228  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:28.734091  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 15:16:29.149319  269889 node_ready.go:57] node "addons-141726" has "Ready":"False" status (will retry)
	I1202 15:16:29.221841  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:29.221980  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:29.232876  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:29.391629  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.477191246s)
	I1202 15:16:29.721943  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:29.722232  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:29.733722  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:30.221673  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:30.221726  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:30.233247  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:30.721654  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:30.721816  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:30.733559  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:31.221704  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:31.221887  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:31.233194  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 15:16:31.648621  269889 node_ready.go:57] node "addons-141726" has "Ready":"False" status (will retry)
	I1202 15:16:31.721751  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:31.721772  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:31.733368  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:32.221225  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:32.221278  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:32.233759  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:32.240986  269889 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 15:16:32.241048  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:32.260637  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:32.367624  269889 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 15:16:32.380914  269889 addons.go:239] Setting addon gcp-auth=true in "addons-141726"
	I1202 15:16:32.380965  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:32.381301  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:32.400618  269889 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 15:16:32.400660  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:32.419542  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:32.517095  269889 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 15:16:32.518624  269889 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 15:16:32.519774  269889 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 15:16:32.519798  269889 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 15:16:32.533327  269889 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 15:16:32.533361  269889 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 15:16:32.547519  269889 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 15:16:32.547542  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 15:16:32.561455  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 15:16:32.721860  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:32.721955  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:32.733443  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:32.879827  269889 addons.go:495] Verifying addon gcp-auth=true in "addons-141726"
	I1202 15:16:32.881055  269889 out.go:179] * Verifying gcp-auth addon...
	I1202 15:16:32.883436  269889 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 15:16:32.885567  269889 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 15:16:32.885591  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:33.221662  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:33.221787  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:33.233288  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:33.387215  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 15:16:33.649298  269889 node_ready.go:57] node "addons-141726" has "Ready":"False" status (will retry)
	I1202 15:16:33.721300  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:33.721702  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:33.822511  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:33.887223  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:34.221075  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:34.221491  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:34.233147  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:34.387189  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:34.721245  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:34.721321  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:34.733819  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:34.886346  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:35.220959  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:35.221182  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:35.233741  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:35.386670  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:35.722301  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:35.722313  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:35.733734  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:35.886637  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 15:16:36.149574  269889 node_ready.go:57] node "addons-141726" has "Ready":"False" status (will retry)
	I1202 15:16:36.221221  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:36.221588  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:36.233718  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:36.387200  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:36.650809  269889 node_ready.go:49] node "addons-141726" is "Ready"
	I1202 15:16:36.650858  269889 node_ready.go:38] duration metric: took 11.505100033s for node "addons-141726" to be "Ready" ...
	I1202 15:16:36.650878  269889 api_server.go:52] waiting for apiserver process to appear ...
	I1202 15:16:36.650939  269889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 15:16:36.674104  269889 api_server.go:72] duration metric: took 12.095908422s to wait for apiserver process to appear ...
	I1202 15:16:36.674140  269889 api_server.go:88] waiting for apiserver healthz status ...
	I1202 15:16:36.674168  269889 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 15:16:36.679660  269889 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 15:16:36.680671  269889 api_server.go:141] control plane version: v1.34.2
	I1202 15:16:36.680704  269889 api_server.go:131] duration metric: took 6.556216ms to wait for apiserver health ...
	I1202 15:16:36.680717  269889 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 15:16:36.684671  269889 system_pods.go:59] 20 kube-system pods found
	I1202 15:16:36.684709  269889 system_pods.go:61] "amd-gpu-device-plugin-5f7fs" [f2b19fdb-b25c-4936-aabf-26c33a233e0e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 15:16:36.684718  269889 system_pods.go:61] "coredns-66bc5c9577-4lmgt" [d46c8b2e-ddd0-4a4a-8250-61aea385667d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 15:16:36.684727  269889 system_pods.go:61] "csi-hostpath-attacher-0" [d80978c0-9200-4dc6-95c1-d84a76eefd36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 15:16:36.684731  269889 system_pods.go:61] "csi-hostpath-resizer-0" [f665549e-f00a-4974-8e84-a683f0595510] Pending
	I1202 15:16:36.684736  269889 system_pods.go:61] "csi-hostpathplugin-kdbl4" [4497fccc-9a9f-4e59-8bf0-4f3cbf2596ce] Pending
	I1202 15:16:36.684740  269889 system_pods.go:61] "etcd-addons-141726" [821f25a8-606b-4801-a713-bb19c4d70b79] Running
	I1202 15:16:36.684745  269889 system_pods.go:61] "kindnet-6j8vt" [e79cc485-44b5-4858-a017-56f335770ce1] Running
	I1202 15:16:36.684749  269889 system_pods.go:61] "kube-apiserver-addons-141726" [98baf8b8-7320-4686-8e29-6b3c5001bdce] Running
	I1202 15:16:36.684752  269889 system_pods.go:61] "kube-controller-manager-addons-141726" [458659e3-701d-4f8c-9443-36b8cd099bb9] Running
	I1202 15:16:36.684758  269889 system_pods.go:61] "kube-ingress-dns-minikube" [08c0ee33-a0d3-4db5-95a5-7c75138c80f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 15:16:36.684767  269889 system_pods.go:61] "kube-proxy-ngfdv" [18e885be-e7eb-4886-9c44-06e4630025c2] Running
	I1202 15:16:36.684770  269889 system_pods.go:61] "kube-scheduler-addons-141726" [44d110b6-3c0f-443f-a8c8-f70b0d783e3a] Running
	I1202 15:16:36.684775  269889 system_pods.go:61] "metrics-server-85b7d694d7-fdkfv" [29527913-8c48-4e43-932a-d58b491cf15d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 15:16:36.684779  269889 system_pods.go:61] "nvidia-device-plugin-daemonset-gdvkl" [387a816e-abc8-433b-85b4-4c9d2df06ea3] Pending
	I1202 15:16:36.684784  269889 system_pods.go:61] "registry-6b586f9694-4ndqk" [ca026742-659d-47f4-80ef-ccc67046c4d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 15:16:36.684789  269889 system_pods.go:61] "registry-creds-764b6fb674-pw2zl" [39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 15:16:36.684792  269889 system_pods.go:61] "registry-proxy-md75n" [73921430-8808-4e00-888a-b97d19bf02e5] Pending
	I1202 15:16:36.684798  269889 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2svxc" [dc40710d-232f-4cfd-a136-e042fd8c9c4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:36.684805  269889 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bxzws" [260f4023-60ed-4220-b262-009dc06daa3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:36.684809  269889 system_pods.go:61] "storage-provisioner" [eba14afe-432c-429d-8dda-5734280cc7ca] Pending
	I1202 15:16:36.684816  269889 system_pods.go:74] duration metric: took 4.092822ms to wait for pod list to return data ...
	I1202 15:16:36.684826  269889 default_sa.go:34] waiting for default service account to be created ...
	I1202 15:16:36.687169  269889 default_sa.go:45] found service account: "default"
	I1202 15:16:36.687201  269889 default_sa.go:55] duration metric: took 2.368431ms for default service account to be created ...
	I1202 15:16:36.687214  269889 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 15:16:36.690395  269889 system_pods.go:86] 20 kube-system pods found
	I1202 15:16:36.690448  269889 system_pods.go:89] "amd-gpu-device-plugin-5f7fs" [f2b19fdb-b25c-4936-aabf-26c33a233e0e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 15:16:36.690455  269889 system_pods.go:89] "coredns-66bc5c9577-4lmgt" [d46c8b2e-ddd0-4a4a-8250-61aea385667d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 15:16:36.690463  269889 system_pods.go:89] "csi-hostpath-attacher-0" [d80978c0-9200-4dc6-95c1-d84a76eefd36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 15:16:36.690468  269889 system_pods.go:89] "csi-hostpath-resizer-0" [f665549e-f00a-4974-8e84-a683f0595510] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 15:16:36.690472  269889 system_pods.go:89] "csi-hostpathplugin-kdbl4" [4497fccc-9a9f-4e59-8bf0-4f3cbf2596ce] Pending
	I1202 15:16:36.690475  269889 system_pods.go:89] "etcd-addons-141726" [821f25a8-606b-4801-a713-bb19c4d70b79] Running
	I1202 15:16:36.690479  269889 system_pods.go:89] "kindnet-6j8vt" [e79cc485-44b5-4858-a017-56f335770ce1] Running
	I1202 15:16:36.690483  269889 system_pods.go:89] "kube-apiserver-addons-141726" [98baf8b8-7320-4686-8e29-6b3c5001bdce] Running
	I1202 15:16:36.690487  269889 system_pods.go:89] "kube-controller-manager-addons-141726" [458659e3-701d-4f8c-9443-36b8cd099bb9] Running
	I1202 15:16:36.690496  269889 system_pods.go:89] "kube-ingress-dns-minikube" [08c0ee33-a0d3-4db5-95a5-7c75138c80f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 15:16:36.690501  269889 system_pods.go:89] "kube-proxy-ngfdv" [18e885be-e7eb-4886-9c44-06e4630025c2] Running
	I1202 15:16:36.690511  269889 system_pods.go:89] "kube-scheduler-addons-141726" [44d110b6-3c0f-443f-a8c8-f70b0d783e3a] Running
	I1202 15:16:36.690516  269889 system_pods.go:89] "metrics-server-85b7d694d7-fdkfv" [29527913-8c48-4e43-932a-d58b491cf15d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 15:16:36.690523  269889 system_pods.go:89] "nvidia-device-plugin-daemonset-gdvkl" [387a816e-abc8-433b-85b4-4c9d2df06ea3] Pending
	I1202 15:16:36.690529  269889 system_pods.go:89] "registry-6b586f9694-4ndqk" [ca026742-659d-47f4-80ef-ccc67046c4d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 15:16:36.690536  269889 system_pods.go:89] "registry-creds-764b6fb674-pw2zl" [39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 15:16:36.690540  269889 system_pods.go:89] "registry-proxy-md75n" [73921430-8808-4e00-888a-b97d19bf02e5] Pending
	I1202 15:16:36.690550  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2svxc" [dc40710d-232f-4cfd-a136-e042fd8c9c4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:36.690562  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxzws" [260f4023-60ed-4220-b262-009dc06daa3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:36.690568  269889 system_pods.go:89] "storage-provisioner" [eba14afe-432c-429d-8dda-5734280cc7ca] Pending
	I1202 15:16:36.690588  269889 retry.go:31] will retry after 280.343325ms: missing components: kube-dns
	I1202 15:16:36.721405  269889 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 15:16:36.721444  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:36.721459  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:36.733706  269889 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 15:16:36.733731  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:36.886994  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:36.988633  269889 system_pods.go:86] 20 kube-system pods found
	I1202 15:16:36.988668  269889 system_pods.go:89] "amd-gpu-device-plugin-5f7fs" [f2b19fdb-b25c-4936-aabf-26c33a233e0e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 15:16:36.988675  269889 system_pods.go:89] "coredns-66bc5c9577-4lmgt" [d46c8b2e-ddd0-4a4a-8250-61aea385667d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 15:16:36.988683  269889 system_pods.go:89] "csi-hostpath-attacher-0" [d80978c0-9200-4dc6-95c1-d84a76eefd36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 15:16:36.988690  269889 system_pods.go:89] "csi-hostpath-resizer-0" [f665549e-f00a-4974-8e84-a683f0595510] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 15:16:36.988697  269889 system_pods.go:89] "csi-hostpathplugin-kdbl4" [4497fccc-9a9f-4e59-8bf0-4f3cbf2596ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 15:16:36.988701  269889 system_pods.go:89] "etcd-addons-141726" [821f25a8-606b-4801-a713-bb19c4d70b79] Running
	I1202 15:16:36.988707  269889 system_pods.go:89] "kindnet-6j8vt" [e79cc485-44b5-4858-a017-56f335770ce1] Running
	I1202 15:16:36.988730  269889 system_pods.go:89] "kube-apiserver-addons-141726" [98baf8b8-7320-4686-8e29-6b3c5001bdce] Running
	I1202 15:16:36.988735  269889 system_pods.go:89] "kube-controller-manager-addons-141726" [458659e3-701d-4f8c-9443-36b8cd099bb9] Running
	I1202 15:16:36.988740  269889 system_pods.go:89] "kube-ingress-dns-minikube" [08c0ee33-a0d3-4db5-95a5-7c75138c80f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 15:16:36.988743  269889 system_pods.go:89] "kube-proxy-ngfdv" [18e885be-e7eb-4886-9c44-06e4630025c2] Running
	I1202 15:16:36.988747  269889 system_pods.go:89] "kube-scheduler-addons-141726" [44d110b6-3c0f-443f-a8c8-f70b0d783e3a] Running
	I1202 15:16:36.988752  269889 system_pods.go:89] "metrics-server-85b7d694d7-fdkfv" [29527913-8c48-4e43-932a-d58b491cf15d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 15:16:36.988758  269889 system_pods.go:89] "nvidia-device-plugin-daemonset-gdvkl" [387a816e-abc8-433b-85b4-4c9d2df06ea3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 15:16:36.988769  269889 system_pods.go:89] "registry-6b586f9694-4ndqk" [ca026742-659d-47f4-80ef-ccc67046c4d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 15:16:36.988777  269889 system_pods.go:89] "registry-creds-764b6fb674-pw2zl" [39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 15:16:36.988785  269889 system_pods.go:89] "registry-proxy-md75n" [73921430-8808-4e00-888a-b97d19bf02e5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 15:16:36.988790  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2svxc" [dc40710d-232f-4cfd-a136-e042fd8c9c4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:36.988800  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxzws" [260f4023-60ed-4220-b262-009dc06daa3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:36.988806  269889 system_pods.go:89] "storage-provisioner" [eba14afe-432c-429d-8dda-5734280cc7ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 15:16:36.988825  269889 retry.go:31] will retry after 323.861425ms: missing components: kube-dns
	I1202 15:16:37.222859  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:37.223145  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:37.235350  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:37.318072  269889 system_pods.go:86] 20 kube-system pods found
	I1202 15:16:37.318115  269889 system_pods.go:89] "amd-gpu-device-plugin-5f7fs" [f2b19fdb-b25c-4936-aabf-26c33a233e0e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 15:16:37.318128  269889 system_pods.go:89] "coredns-66bc5c9577-4lmgt" [d46c8b2e-ddd0-4a4a-8250-61aea385667d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 15:16:37.318140  269889 system_pods.go:89] "csi-hostpath-attacher-0" [d80978c0-9200-4dc6-95c1-d84a76eefd36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 15:16:37.318149  269889 system_pods.go:89] "csi-hostpath-resizer-0" [f665549e-f00a-4974-8e84-a683f0595510] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 15:16:37.318158  269889 system_pods.go:89] "csi-hostpathplugin-kdbl4" [4497fccc-9a9f-4e59-8bf0-4f3cbf2596ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 15:16:37.318166  269889 system_pods.go:89] "etcd-addons-141726" [821f25a8-606b-4801-a713-bb19c4d70b79] Running
	I1202 15:16:37.318173  269889 system_pods.go:89] "kindnet-6j8vt" [e79cc485-44b5-4858-a017-56f335770ce1] Running
	I1202 15:16:37.318181  269889 system_pods.go:89] "kube-apiserver-addons-141726" [98baf8b8-7320-4686-8e29-6b3c5001bdce] Running
	I1202 15:16:37.318190  269889 system_pods.go:89] "kube-controller-manager-addons-141726" [458659e3-701d-4f8c-9443-36b8cd099bb9] Running
	I1202 15:16:37.318208  269889 system_pods.go:89] "kube-ingress-dns-minikube" [08c0ee33-a0d3-4db5-95a5-7c75138c80f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 15:16:37.318213  269889 system_pods.go:89] "kube-proxy-ngfdv" [18e885be-e7eb-4886-9c44-06e4630025c2] Running
	I1202 15:16:37.318219  269889 system_pods.go:89] "kube-scheduler-addons-141726" [44d110b6-3c0f-443f-a8c8-f70b0d783e3a] Running
	I1202 15:16:37.318227  269889 system_pods.go:89] "metrics-server-85b7d694d7-fdkfv" [29527913-8c48-4e43-932a-d58b491cf15d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 15:16:37.318238  269889 system_pods.go:89] "nvidia-device-plugin-daemonset-gdvkl" [387a816e-abc8-433b-85b4-4c9d2df06ea3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 15:16:37.318247  269889 system_pods.go:89] "registry-6b586f9694-4ndqk" [ca026742-659d-47f4-80ef-ccc67046c4d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 15:16:37.318256  269889 system_pods.go:89] "registry-creds-764b6fb674-pw2zl" [39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 15:16:37.318263  269889 system_pods.go:89] "registry-proxy-md75n" [73921430-8808-4e00-888a-b97d19bf02e5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 15:16:37.318272  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2svxc" [dc40710d-232f-4cfd-a136-e042fd8c9c4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:37.318281  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxzws" [260f4023-60ed-4220-b262-009dc06daa3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:37.318288  269889 system_pods.go:89] "storage-provisioner" [eba14afe-432c-429d-8dda-5734280cc7ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 15:16:37.318309  269889 retry.go:31] will retry after 323.063008ms: missing components: kube-dns
	I1202 15:16:37.387316  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:37.646043  269889 system_pods.go:86] 20 kube-system pods found
	I1202 15:16:37.646079  269889 system_pods.go:89] "amd-gpu-device-plugin-5f7fs" [f2b19fdb-b25c-4936-aabf-26c33a233e0e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 15:16:37.646085  269889 system_pods.go:89] "coredns-66bc5c9577-4lmgt" [d46c8b2e-ddd0-4a4a-8250-61aea385667d] Running
	I1202 15:16:37.646093  269889 system_pods.go:89] "csi-hostpath-attacher-0" [d80978c0-9200-4dc6-95c1-d84a76eefd36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 15:16:37.646099  269889 system_pods.go:89] "csi-hostpath-resizer-0" [f665549e-f00a-4974-8e84-a683f0595510] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 15:16:37.646108  269889 system_pods.go:89] "csi-hostpathplugin-kdbl4" [4497fccc-9a9f-4e59-8bf0-4f3cbf2596ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 15:16:37.646113  269889 system_pods.go:89] "etcd-addons-141726" [821f25a8-606b-4801-a713-bb19c4d70b79] Running
	I1202 15:16:37.646119  269889 system_pods.go:89] "kindnet-6j8vt" [e79cc485-44b5-4858-a017-56f335770ce1] Running
	I1202 15:16:37.646126  269889 system_pods.go:89] "kube-apiserver-addons-141726" [98baf8b8-7320-4686-8e29-6b3c5001bdce] Running
	I1202 15:16:37.646139  269889 system_pods.go:89] "kube-controller-manager-addons-141726" [458659e3-701d-4f8c-9443-36b8cd099bb9] Running
	I1202 15:16:37.646147  269889 system_pods.go:89] "kube-ingress-dns-minikube" [08c0ee33-a0d3-4db5-95a5-7c75138c80f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 15:16:37.646157  269889 system_pods.go:89] "kube-proxy-ngfdv" [18e885be-e7eb-4886-9c44-06e4630025c2] Running
	I1202 15:16:37.646162  269889 system_pods.go:89] "kube-scheduler-addons-141726" [44d110b6-3c0f-443f-a8c8-f70b0d783e3a] Running
	I1202 15:16:37.646170  269889 system_pods.go:89] "metrics-server-85b7d694d7-fdkfv" [29527913-8c48-4e43-932a-d58b491cf15d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 15:16:37.646176  269889 system_pods.go:89] "nvidia-device-plugin-daemonset-gdvkl" [387a816e-abc8-433b-85b4-4c9d2df06ea3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 15:16:37.646185  269889 system_pods.go:89] "registry-6b586f9694-4ndqk" [ca026742-659d-47f4-80ef-ccc67046c4d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 15:16:37.646191  269889 system_pods.go:89] "registry-creds-764b6fb674-pw2zl" [39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 15:16:37.646200  269889 system_pods.go:89] "registry-proxy-md75n" [73921430-8808-4e00-888a-b97d19bf02e5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 15:16:37.646205  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2svxc" [dc40710d-232f-4cfd-a136-e042fd8c9c4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:37.646214  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxzws" [260f4023-60ed-4220-b262-009dc06daa3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:37.646218  269889 system_pods.go:89] "storage-provisioner" [eba14afe-432c-429d-8dda-5734280cc7ca] Running
	I1202 15:16:37.646229  269889 system_pods.go:126] duration metric: took 959.007402ms to wait for k8s-apps to be running ...
	I1202 15:16:37.646244  269889 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 15:16:37.646296  269889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:16:37.661262  269889 system_svc.go:56] duration metric: took 15.005265ms WaitForService to wait for kubelet
	I1202 15:16:37.661302  269889 kubeadm.go:587] duration metric: took 13.083110952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 15:16:37.661325  269889 node_conditions.go:102] verifying NodePressure condition ...
	I1202 15:16:37.664696  269889 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 15:16:37.664735  269889 node_conditions.go:123] node cpu capacity is 8
	I1202 15:16:37.664811  269889 node_conditions.go:105] duration metric: took 3.477691ms to run NodePressure ...
	I1202 15:16:37.664826  269889 start.go:242] waiting for startup goroutines ...
	I1202 15:16:37.721659  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:37.721730  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:37.733896  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:37.887290  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:38.221619  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:38.221640  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:38.233456  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:38.387159  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:38.721833  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:38.721909  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:38.733671  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:38.886363  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:39.221711  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:39.221803  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:39.233148  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:39.387995  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:39.721323  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:39.721509  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:39.733213  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:39.887127  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:40.221795  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:40.221862  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:40.233737  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:40.386360  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:40.722203  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:40.722351  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:40.734023  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:40.886739  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:41.222246  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:41.223678  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:41.234238  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:41.388516  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:41.721271  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:41.721917  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:41.735070  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:41.887750  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:42.222378  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:42.222586  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:42.234182  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:42.387228  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:42.721824  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:42.721856  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:42.733853  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:42.887024  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:43.222006  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:43.222007  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:43.233816  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:43.387119  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:43.721612  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:43.721647  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:43.733876  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:43.887245  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:44.221830  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:44.221863  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:44.234659  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:44.388292  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:44.722234  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:44.722379  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:44.734614  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:44.886136  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:45.221548  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:45.221601  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:45.233211  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:45.387333  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:45.722028  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:45.722315  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:45.733793  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:45.886906  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:46.220887  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:46.221607  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:46.233467  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:46.387273  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:46.721639  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:46.721696  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:46.734000  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:46.887394  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:47.221900  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:47.221994  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:47.233370  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:47.387157  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:47.721634  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:47.721757  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:47.733478  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:47.887190  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:48.220954  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:48.221014  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:48.234089  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:48.386666  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:48.721649  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:48.721673  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:48.733157  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:48.887201  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:49.221859  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:49.221965  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:49.234114  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:49.387039  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:49.721588  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:49.721778  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:49.734024  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:49.887239  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:50.222164  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:50.222254  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:50.234536  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:50.387701  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:50.721124  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:50.721660  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:50.734265  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:50.887192  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:51.221915  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:51.221955  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:51.233631  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:51.387583  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:51.721854  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:51.721874  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:51.733719  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:51.886713  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:52.221878  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:52.221957  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:52.234081  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:52.387486  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:52.721592  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:52.721764  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:52.732755  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:52.886170  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:53.221381  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:53.221749  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:53.233037  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:53.386815  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:53.721389  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:53.721560  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:53.734132  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:53.887006  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:54.221412  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:54.221877  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:54.234647  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:54.387583  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:54.722628  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:54.722661  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:54.733301  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:54.887852  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:55.222543  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:55.222591  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:55.233758  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:55.386710  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:55.721295  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:55.721637  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:55.736006  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:55.886881  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:56.221598  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:56.221730  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:56.233230  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:56.387215  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:56.721827  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:56.721852  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:56.733367  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:56.887082  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:57.221242  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:57.221546  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:57.233385  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:57.387194  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:57.721314  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:57.721386  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:57.734404  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:57.887666  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:58.221910  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:58.221954  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:58.234133  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:58.387052  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:58.724110  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:58.724195  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:58.735493  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:58.887788  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:59.223077  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:59.223121  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:59.233978  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:59.387165  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:59.721593  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:59.721657  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:59.733898  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:59.888693  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:00.221312  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:00.221376  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:00.234331  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:00.387357  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:00.895830  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:00.895845  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:00.895875  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:00.896132  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:01.243677  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:01.244019  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:01.244073  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:01.386940  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:01.721228  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:01.721919  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:01.733893  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:01.887255  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:02.221966  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:02.222145  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:02.234017  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:02.386504  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:02.722154  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:02.722194  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:02.734245  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:02.887274  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:03.221692  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:03.221859  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:03.233711  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:03.386527  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:03.721481  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:03.721537  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:03.733097  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:03.886835  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:04.220860  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:04.221547  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:04.233642  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:04.387249  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:04.721466  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:04.721499  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:04.734494  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:04.887655  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:05.221955  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:05.222142  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:05.234091  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:05.387461  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:05.722258  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:05.722618  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:05.734078  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:05.887012  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:06.220870  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:06.221568  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:06.233596  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:06.388161  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:06.721792  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:06.721800  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:06.733687  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:06.888296  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:07.221389  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:07.221775  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:07.234300  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:07.387591  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:07.721565  269889 kapi.go:107] duration metric: took 41.503014015s to wait for kubernetes.io/minikube-addons=registry ...
	I1202 15:17:07.721606  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:07.733328  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:07.887404  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:08.221956  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:08.233742  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:08.386939  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:08.723660  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:08.735635  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:08.888088  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:09.221553  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:09.233688  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:09.388253  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:09.721473  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:09.734254  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:09.887281  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:10.221626  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:10.233715  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:10.387160  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:10.721349  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:10.734442  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:10.888167  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:11.228171  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:11.235833  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:11.390164  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:11.721793  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:11.734570  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:11.887263  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:12.221464  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:12.234733  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:12.386917  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:12.721469  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:12.734267  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:12.929081  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:13.221617  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:13.233559  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:13.388049  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:13.721277  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:13.734438  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:13.887618  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:14.222338  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:14.233948  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:14.387253  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:14.722294  269889 kapi.go:107] duration metric: took 48.504428753s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 15:17:14.734696  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:14.886209  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:15.234128  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:15.387266  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:15.766697  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:15.886264  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:16.233783  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:16.387447  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:16.734261  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:16.886858  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:17.234848  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:17.386799  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:17.736217  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:17.887362  269889 kapi.go:107] duration metric: took 45.003939373s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1202 15:17:17.947223  269889 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-141726 cluster.
	I1202 15:17:18.084255  269889 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 15:17:18.094664  269889 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1202 15:17:18.234681  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:18.735177  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:19.233708  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:19.735072  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:20.233876  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:20.734236  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:21.234820  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:21.734747  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:22.234472  269889 kapi.go:107] duration metric: took 55.504138764s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 15:17:22.236368  269889 out.go:179] * Enabled addons: registry-creds, inspektor-gadget, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, nvidia-device-plugin, metrics-server, ingress-dns, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1202 15:17:22.237575  269889 addons.go:530] duration metric: took 57.659360679s for enable addons: enabled=[registry-creds inspektor-gadget amd-gpu-device-plugin storage-provisioner cloud-spanner nvidia-device-plugin metrics-server ingress-dns yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1202 15:17:22.237618  269889 start.go:247] waiting for cluster config update ...
	I1202 15:17:22.237638  269889 start.go:256] writing updated cluster config ...
	I1202 15:17:22.237893  269889 ssh_runner.go:195] Run: rm -f paused
	I1202 15:17:22.241864  269889 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 15:17:22.245057  269889 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmgt" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.248877  269889 pod_ready.go:94] pod "coredns-66bc5c9577-4lmgt" is "Ready"
	I1202 15:17:22.248896  269889 pod_ready.go:86] duration metric: took 3.820816ms for pod "coredns-66bc5c9577-4lmgt" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.250575  269889 pod_ready.go:83] waiting for pod "etcd-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.253853  269889 pod_ready.go:94] pod "etcd-addons-141726" is "Ready"
	I1202 15:17:22.253872  269889 pod_ready.go:86] duration metric: took 3.279844ms for pod "etcd-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.255624  269889 pod_ready.go:83] waiting for pod "kube-apiserver-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.258858  269889 pod_ready.go:94] pod "kube-apiserver-addons-141726" is "Ready"
	I1202 15:17:22.258877  269889 pod_ready.go:86] duration metric: took 3.236011ms for pod "kube-apiserver-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.260514  269889 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.645772  269889 pod_ready.go:94] pod "kube-controller-manager-addons-141726" is "Ready"
	I1202 15:17:22.645802  269889 pod_ready.go:86] duration metric: took 385.272457ms for pod "kube-controller-manager-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.846309  269889 pod_ready.go:83] waiting for pod "kube-proxy-ngfdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:23.246326  269889 pod_ready.go:94] pod "kube-proxy-ngfdv" is "Ready"
	I1202 15:17:23.246355  269889 pod_ready.go:86] duration metric: took 400.021885ms for pod "kube-proxy-ngfdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:23.446666  269889 pod_ready.go:83] waiting for pod "kube-scheduler-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:23.846203  269889 pod_ready.go:94] pod "kube-scheduler-addons-141726" is "Ready"
	I1202 15:17:23.846240  269889 pod_ready.go:86] duration metric: took 399.546779ms for pod "kube-scheduler-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:23.846257  269889 pod_ready.go:40] duration metric: took 1.604360055s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 15:17:23.892799  269889 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 15:17:23.974654  269889 out.go:179] * Done! kubectl is now configured to use "addons-141726" cluster and "default" namespace by default
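	For reference, the `gcp-auth-skip-secret` hint printed by the gcp-auth addon above is applied as an ordinary pod label. A minimal sketch follows; the pod name, the label value "true", and the image tag are illustrative assumptions, not taken from this test run (the image name itself appears in the container listing further below).

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: demo-no-gcp-auth            # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"    # assumed value; the gcp-auth webhook skips credential mounting for labeled pods
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/k8s-minikube/busybox   # placeholder image; any image works
	        command: ["sleep", "3600"]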
	
	
	==> CRI-O <==
	Dec 02 15:18:55 addons-141726 crio[777]: time="2025-12-02T15:18:55.450034849Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=97f5b0f4-530c-4605-bb4f-9a2ab415f866 name=/runtime.v1.ImageService/PullImage
	Dec 02 15:18:55 addons-141726 crio[777]: time="2025-12-02T15:18:55.454007403Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Dec 02 15:18:57 addons-141726 crio[777]: time="2025-12-02T15:18:57.038153253Z" level=info msg="Pulled image: docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=97f5b0f4-530c-4605-bb4f-9a2ab415f866 name=/runtime.v1.ImageService/PullImage
	Dec 02 15:18:57 addons-141726 crio[777]: time="2025-12-02T15:18:57.038908165Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=c14bdc84-3295-4083-beb5-9f280e7a80f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:18:57 addons-141726 crio[777]: time="2025-12-02T15:18:57.072755741Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=70e95e3b-ef1c-4ca0-ad34-91280b2450e0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:18:57 addons-141726 crio[777]: time="2025-12-02T15:18:57.077074387Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-pw2zl/registry-creds" id=255c6eae-8c43-4ea6-8769-b7f7a972723f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 15:18:57 addons-141726 crio[777]: time="2025-12-02T15:18:57.077207383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:18:57 addons-141726 crio[777]: time="2025-12-02T15:18:57.083185819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:18:57 addons-141726 crio[777]: time="2025-12-02T15:18:57.083724916Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:18:57 addons-141726 crio[777]: time="2025-12-02T15:18:57.110605711Z" level=info msg="Created container e00f0e468c729689ef94a7bcf9f05744ea3b630f3b93460659837cda53f68881: kube-system/registry-creds-764b6fb674-pw2zl/registry-creds" id=255c6eae-8c43-4ea6-8769-b7f7a972723f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 15:18:57 addons-141726 crio[777]: time="2025-12-02T15:18:57.111232352Z" level=info msg="Starting container: e00f0e468c729689ef94a7bcf9f05744ea3b630f3b93460659837cda53f68881" id=55dab2cd-e86e-489b-9f67-176118c9362a name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 15:18:57 addons-141726 crio[777]: time="2025-12-02T15:18:57.113304812Z" level=info msg="Started container" PID=8873 containerID=e00f0e468c729689ef94a7bcf9f05744ea3b630f3b93460659837cda53f68881 description=kube-system/registry-creds-764b6fb674-pw2zl/registry-creds id=55dab2cd-e86e-489b-9f67-176118c9362a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8740357b503d1ef72b5ac5bbe20480eae29629d6e2c27fda9443fea9a191ab58
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.559030729Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-2bmrl/POD" id=c2b3c330-dd33-4b53-9417-514d62b14787 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.55911143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.565148184Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-2bmrl Namespace:default ID:d81b31bf15fa263def0dcaf3344b71a81d76730e76da33abce748d99edb56f3e UID:544dd814-f512-4a19-bd04-fa00a5e89ecd NetNS:/var/run/netns/858b5a92-bd16-46a2-8d91-6adec1d615e5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d9a340}] Aliases:map[]}"
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.565180775Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-2bmrl to CNI network \"kindnet\" (type=ptp)"
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.576895251Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-2bmrl Namespace:default ID:d81b31bf15fa263def0dcaf3344b71a81d76730e76da33abce748d99edb56f3e UID:544dd814-f512-4a19-bd04-fa00a5e89ecd NetNS:/var/run/netns/858b5a92-bd16-46a2-8d91-6adec1d615e5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d9a340}] Aliases:map[]}"
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.577033455Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-2bmrl for CNI network kindnet (type=ptp)"
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.577925031Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.578719473Z" level=info msg="Ran pod sandbox d81b31bf15fa263def0dcaf3344b71a81d76730e76da33abce748d99edb56f3e with infra container: default/hello-world-app-5d498dc89-2bmrl/POD" id=c2b3c330-dd33-4b53-9417-514d62b14787 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.580074136Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=64e9e89a-3840-42a6-8515-8cc7ae671d77 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.580273815Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=64e9e89a-3840-42a6-8515-8cc7ae671d77 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.580313239Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=64e9e89a-3840-42a6-8515-8cc7ae671d77 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.581042175Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=3598188f-a68b-4339-97bc-f0cc969ff3a4 name=/runtime.v1.ImageService/PullImage
	Dec 02 15:20:13 addons-141726 crio[777]: time="2025-12-02T15:20:13.589924856Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	e00f0e468c729       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   8740357b503d1       registry-creds-764b6fb674-pw2zl            kube-system
	d6f0bfd80eaae       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago        Running             nginx                                    0                   db0eff86e1f5f       nginx                                      default
	c50e4749cdb99       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   e3aeb7956ff3b       busybox                                    default
	5412cbcb9dad2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   24730fde38781       csi-hostpathplugin-kdbl4                   kube-system
	584adcb3687a9       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   24730fde38781       csi-hostpathplugin-kdbl4                   kube-system
	00fe2f1035e36       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   24730fde38781       csi-hostpathplugin-kdbl4                   kube-system
	c829c27b2be0c       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   24730fde38781       csi-hostpathplugin-kdbl4                   kube-system
	fadc45d9931b2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago        Running             gcp-auth                                 0                   0b0b94d99d0f9       gcp-auth-78565c9fb4-v79fk                  gcp-auth
	3307ae3898cde       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago        Running             node-driver-registrar                    0                   24730fde38781       csi-hostpathplugin-kdbl4                   kube-system
	30003c9aa47bb       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             3 minutes ago        Running             controller                               0                   fd38856ee9de5       ingress-nginx-controller-6c8bf45fb-hqxvp   ingress-nginx
	9d97c1bc8794d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            3 minutes ago        Running             gadget                                   0                   703d3da8f1e55       gadget-sbzvc                               gadget
	874bcd460b4cc       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   491c5508c92ba       registry-proxy-md75n                       kube-system
	a087fdb2d51ae       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   86a4d5675b7e2       amd-gpu-device-plugin-5f7fs                kube-system
	44579fbc5bf76       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   1172f295b5034       csi-hostpath-attacher-0                    kube-system
	398f34ffd447f       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   6cf09c0fa449f       csi-hostpath-resizer-0                     kube-system
	51d82f122a468       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   24730fde38781       csi-hostpathplugin-kdbl4                   kube-system
	1433ec009789d       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   ea9beb79218ef       nvidia-device-plugin-daemonset-gdvkl       kube-system
	f7167c650b5e1       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   36124a258c8cd       snapshot-controller-7d9fbc56b8-2svxc       kube-system
	d810aa1a8b1b9       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   a0510f33396f5       yakd-dashboard-5ff678cb9-xl2rh             yakd-dashboard
	ba208fa7fba7a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   580968b1bd643       snapshot-controller-7d9fbc56b8-bxzws       kube-system
	747643e95b13b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              patch                                    0                   a86be7d979883       ingress-nginx-admission-patch-dz5cl        ingress-nginx
	f253449a67080       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              create                                   0                   fd676a9deec95       ingress-nginx-admission-create-xjhbl       ingress-nginx
	b19e466fce39f       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago        Running             cloud-spanner-emulator                   0                   e16aa8865ae21       cloud-spanner-emulator-5bdddb765-rjbxm     default
	515f2711c2508       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   b20e2fdb67c3a       registry-6b586f9694-4ndqk                  kube-system
	db056cf136978       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   df085172f9221       kube-ingress-dns-minikube                  kube-system
	1925e5f023bf3       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   7aec947ab3b54       local-path-provisioner-648f6765c9-9gbt7    local-path-storage
	8aef252bd37ce       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   09eef7b6ef683       metrics-server-85b7d694d7-fdkfv            kube-system
	ed6c258d3fc96       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   bdc14e1057501       storage-provisioner                        kube-system
	62cac40636a5b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   aa3a6c671fa2e       coredns-66bc5c9577-4lmgt                   kube-system
	0d87471c625dc       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             3 minutes ago        Running             kube-proxy                               0                   bcf53154c86ac       kube-proxy-ngfdv                           kube-system
	ad0c6b77e41e3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago        Running             kindnet-cni                              0                   07058c03db816       kindnet-6j8vt                              kube-system
	8ae5e65fa7abb       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             3 minutes ago        Running             kube-controller-manager                  0                   52f09c08cc438       kube-controller-manager-addons-141726      kube-system
	d4ee4d2470fd1       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             3 minutes ago        Running             kube-apiserver                           0                   67faec2250512       kube-apiserver-addons-141726               kube-system
	762c736ec2bae       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             3 minutes ago        Running             etcd                                     0                   4edd88401748c       etcd-addons-141726                         kube-system
	2ad3385ae6c40       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             3 minutes ago        Running             kube-scheduler                           0                   76720f8880cd7       kube-scheduler-addons-141726               kube-system
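
A listing like the container-status table above can be reproduced on the node with crictl; a minimal sketch, assuming the profile is still up:

    # list all CRI-O managed containers, including exited ones
    minikube ssh -p addons-141726 "sudo crictl ps -a"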
	
	
	==> coredns [62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46] <==
	[INFO] 10.244.0.22:57739 - 21531 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127882s
	[INFO] 10.244.0.22:41847 - 43397 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006989561s
	[INFO] 10.244.0.22:53632 - 31670 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.012419723s
	[INFO] 10.244.0.22:47138 - 2599 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005045187s
	[INFO] 10.244.0.22:57153 - 62265 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005427359s
	[INFO] 10.244.0.22:40503 - 40213 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004081528s
	[INFO] 10.244.0.22:42664 - 6280 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004944072s
	[INFO] 10.244.0.22:46145 - 6922 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001105682s
	[INFO] 10.244.0.22:50276 - 19793 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.002220391s
	[INFO] 10.244.0.26:38376 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000258558s
	[INFO] 10.244.0.26:48772 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160683s
	[INFO] 10.244.0.31:40329 - 30774 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000231502s
	[INFO] 10.244.0.31:56075 - 40998 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000314815s
	[INFO] 10.244.0.31:38922 - 9170 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000147806s
	[INFO] 10.244.0.31:41643 - 60180 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000195452s
	[INFO] 10.244.0.31:45666 - 38299 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000096544s
	[INFO] 10.244.0.31:38503 - 21951 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000154243s
	[INFO] 10.244.0.31:35593 - 30156 "A IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005266596s
	[INFO] 10.244.0.31:49267 - 53660 "AAAA IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.006963411s
	[INFO] 10.244.0.31:36238 - 51446 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005805674s
	[INFO] 10.244.0.31:53650 - 25771 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.007024599s
	[INFO] 10.244.0.31:56882 - 64112 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005890047s
	[INFO] 10.244.0.31:41373 - 27105 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006306959s
	[INFO] 10.244.0.31:40106 - 27652 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.002160504s
	[INFO] 10.244.0.31:38205 - 57158 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.002288416s
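
The NXDOMAIN-then-NOERROR pattern above is ordinary Kubernetes search-domain expansion: with options ndots:5 in the pod's resolv.conf, a name such as storage.googleapis.com or accounts.google.com is first tried against each search suffix (the cluster.local domains and the GCE internal domains) before being resolved as an absolute name. A minimal sketch for confirming this from the busybox pod listed earlier, assuming minikube's default context naming:

    # the "search" line and "options ndots:5" explain the expanded lookups above
    kubectl --context addons-141726 exec busybox -- cat /etc/resolv.conf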
	
	
	==> describe nodes <==
	Name:               addons-141726
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-141726
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=addons-141726
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_16_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-141726
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-141726"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:16:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-141726
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:20:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:20:13 +0000   Tue, 02 Dec 2025 15:16:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:20:13 +0000   Tue, 02 Dec 2025 15:16:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:20:13 +0000   Tue, 02 Dec 2025 15:16:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:20:13 +0000   Tue, 02 Dec 2025 15:16:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-141726
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                a9e18f89-559e-4220-a5c7-14350d2ece01
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  default                     cloud-spanner-emulator-5bdddb765-rjbxm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  default                     hello-world-app-5d498dc89-2bmrl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-sbzvc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  gcp-auth                    gcp-auth-78565c9fb4-v79fk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-hqxvp    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m48s
	  kube-system                 amd-gpu-device-plugin-5f7fs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-66bc5c9577-4lmgt                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m49s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 csi-hostpathplugin-kdbl4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-addons-141726                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m55s
	  kube-system                 kindnet-6j8vt                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m49s
	  kube-system                 kube-apiserver-addons-141726                250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-addons-141726       200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-ngfdv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-scheduler-addons-141726                100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 metrics-server-85b7d694d7-fdkfv             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m49s
	  kube-system                 nvidia-device-plugin-daemonset-gdvkl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 registry-6b586f9694-4ndqk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 registry-creds-764b6fb674-pw2zl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 registry-proxy-md75n                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 snapshot-controller-7d9fbc56b8-2svxc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 snapshot-controller-7d9fbc56b8-bxzws        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  local-path-storage          local-path-provisioner-648f6765c9-9gbt7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-xl2rh              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m48s  kube-proxy       
	  Normal  Starting                 3m55s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m55s  kubelet          Node addons-141726 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s  kubelet          Node addons-141726 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s  kubelet          Node addons-141726 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m50s  node-controller  Node addons-141726 event: Registered Node addons-141726 in Controller
	  Normal  NodeReady                3m38s  kubelet          Node addons-141726 status is now: NodeReady
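
The node description above corresponds to what kubectl reports directly; a minimal sketch under the same context assumption as above:

    kubectl --context addons-141726 describe node addons-141726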
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 9b c8 59 55 e7 08 06
	[  +4.389247] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 07 ad 09 99 ea 08 06
	[Dec 2 15:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.025203] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023929] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 15:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023866] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023913] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +2.047808] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +4.031697] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +8.511329] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[ +16.382712] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 15:19] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
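
The martian-source lines are the node kernel logging packets that arrived on eth0 with unexpected source addresses; a minimal sketch for pulling just those entries from the node, assuming the profile is still running:

    minikube ssh -p addons-141726 "sudo dmesg | grep -i martian"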
	
	
	==> etcd [762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e] <==
	{"level":"warn","ts":"2025-12-02T15:16:16.600276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.614848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.627281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.633831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.641212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.657511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.663645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.669922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.720657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:27.141314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:27.148820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:52.651512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:52.658184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:52.672377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:52.678895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:17:00.892826Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.415932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:17:00.892923Z","caller":"traceutil/trace.go:172","msg":"trace[990663354] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1051; }","duration":"172.526409ms","start":"2025-12-02T15:17:00.720384Z","end":"2025-12-02T15:17:00.892911Z","steps":["trace[990663354] 'agreement among raft nodes before linearized reading'  (duration: 34.533196ms)","trace[990663354] 'range keys from in-memory index tree'  (duration: 137.85094ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T15:17:00.892873Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.441955ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:17:00.893031Z","caller":"traceutil/trace.go:172","msg":"trace[1250853938] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1051; }","duration":"172.622077ms","start":"2025-12-02T15:17:00.720396Z","end":"2025-12-02T15:17:00.893018Z","steps":["trace[1250853938] 'agreement among raft nodes before linearized reading'  (duration: 34.524535ms)","trace[1250853938] 'range keys from in-memory index tree'  (duration: 137.887411ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T15:17:00.893360Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.882048ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041712795482208 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-gdvkl\" mod_revision:868 > success:<request_put:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-gdvkl\" value_size:4441 >> failure:<request_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-gdvkl\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-02T15:17:00.893437Z","caller":"traceutil/trace.go:172","msg":"trace[984492781] linearizableReadLoop","detail":"{readStateIndex:1077; appliedIndex:1076; }","duration":"138.534027ms","start":"2025-12-02T15:17:00.754878Z","end":"2025-12-02T15:17:00.893412Z","steps":["trace[984492781] 'read index received'  (duration: 24.828µs)","trace[984492781] 'applied index is now lower than readState.Index'  (duration: 138.508682ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:17:00.893474Z","caller":"traceutil/trace.go:172","msg":"trace[877079946] transaction","detail":"{read_only:false; response_revision:1052; number_of_response:1; }","duration":"264.022269ms","start":"2025-12-02T15:17:00.629434Z","end":"2025-12-02T15:17:00.893457Z","steps":["trace[877079946] 'process raft request'  (duration: 125.544312ms)","trace[877079946] 'compare'  (duration: 137.793061ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T15:17:00.893503Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.036312ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:17:00.893543Z","caller":"traceutil/trace.go:172","msg":"trace[951369430] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1052; }","duration":"161.077375ms","start":"2025-12-02T15:17:00.732457Z","end":"2025-12-02T15:17:00.893534Z","steps":["trace[951369430] 'agreement among raft nodes before linearized reading'  (duration: 161.011697ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T15:17:01.172193Z","caller":"traceutil/trace.go:172","msg":"trace[1342566136] transaction","detail":"{read_only:false; response_revision:1054; number_of_response:1; }","duration":"271.150061ms","start":"2025-12-02T15:17:00.901025Z","end":"2025-12-02T15:17:01.172175Z","steps":["trace[1342566136] 'process raft request'  (duration: 242.038794ms)","trace[1342566136] 'compare'  (duration: 29.005164ms)"],"step_count":2}
	
	
	==> gcp-auth [fadc45d9931b2c9a66e4bdd265caa4ccd0769d6687d165834032c549cc4b8fa4] <==
	2025/12/02 15:17:17 GCP Auth Webhook started!
	2025/12/02 15:17:24 Ready to marshal response ...
	2025/12/02 15:17:24 Ready to write response ...
	2025/12/02 15:17:24 Ready to marshal response ...
	2025/12/02 15:17:24 Ready to write response ...
	2025/12/02 15:17:24 Ready to marshal response ...
	2025/12/02 15:17:24 Ready to write response ...
	2025/12/02 15:17:38 Ready to marshal response ...
	2025/12/02 15:17:38 Ready to write response ...
	2025/12/02 15:17:38 Ready to marshal response ...
	2025/12/02 15:17:38 Ready to write response ...
	2025/12/02 15:17:45 Ready to marshal response ...
	2025/12/02 15:17:45 Ready to write response ...
	2025/12/02 15:17:47 Ready to marshal response ...
	2025/12/02 15:17:47 Ready to write response ...
	2025/12/02 15:17:48 Ready to marshal response ...
	2025/12/02 15:17:48 Ready to write response ...
	2025/12/02 15:17:52 Ready to marshal response ...
	2025/12/02 15:17:52 Ready to write response ...
	2025/12/02 15:18:10 Ready to marshal response ...
	2025/12/02 15:18:10 Ready to write response ...
	2025/12/02 15:20:13 Ready to marshal response ...
	2025/12/02 15:20:13 Ready to write response ...
	
	
	==> kernel <==
	 15:20:15 up  2:02,  0 user,  load average: 0.34, 1.11, 1.28
	Linux addons-141726 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4] <==
	I1202 15:18:06.098479       1 main.go:301] handling current node
	I1202 15:18:16.098592       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:18:16.098641       1 main.go:301] handling current node
	I1202 15:18:26.100246       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:18:26.100291       1 main.go:301] handling current node
	I1202 15:18:36.098559       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:18:36.098590       1 main.go:301] handling current node
	I1202 15:18:46.106618       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:18:46.106658       1 main.go:301] handling current node
	I1202 15:18:56.103339       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:18:56.103379       1 main.go:301] handling current node
	I1202 15:19:06.101225       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:19:06.101266       1 main.go:301] handling current node
	I1202 15:19:16.106353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:19:16.106387       1 main.go:301] handling current node
	I1202 15:19:26.099219       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:19:26.099247       1 main.go:301] handling current node
	I1202 15:19:36.097514       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:19:36.097559       1 main.go:301] handling current node
	I1202 15:19:46.098365       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:19:46.098396       1 main.go:301] handling current node
	I1202 15:19:56.100097       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:19:56.100128       1 main.go:301] handling current node
	I1202 15:20:06.098279       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:20:06.098310       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93] <==
	W1202 15:16:41.567803       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 15:16:41.568057       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1202 15:16:41.568088       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1202 15:16:41.568096       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 15:16:41.569243       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 15:16:45.580284       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 15:16:45.580337       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1202 15:16:45.580362       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.244.38:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.244.38:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1202 15:16:45.588894       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1202 15:16:52.651371       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 15:16:52.658151       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 15:16:52.672312       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 15:16:52.678896       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1202 15:17:34.769812       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39610: use of closed network connection
	E1202 15:17:34.927363       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39642: use of closed network connection
	I1202 15:17:48.205377       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1202 15:17:48.400225       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.207.215"}
	I1202 15:18:03.086971       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1202 15:20:13.324883       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.44.156"}
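
The v1beta1.metrics.k8s.io errors above show the aggregated metrics API returning 503 while metrics-server was still coming up. A minimal sketch for checking whether the APIService has since become available, under the same context assumption as above:

    # Available should report True once metrics-server answers on its service IP
    kubectl --context addons-141726 get apiservice v1beta1.metrics.k8s.io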
	
	
	==> kube-controller-manager [8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911] <==
	I1202 15:16:24.104224       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 15:16:24.104160       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 15:16:24.104774       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 15:16:24.108686       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:16:24.108772       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 15:16:24.108789       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 15:16:24.108800       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 15:16:24.108802       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 15:16:24.115254       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:16:24.123532       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 15:16:24.129812       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 15:16:24.135190       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 15:16:24.142446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 15:16:24.150882       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 15:16:24.154129       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 15:16:24.154140       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 15:16:24.154157       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 15:16:24.154219       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 15:16:24.154225       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 15:16:39.105417       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1202 15:16:54.121646       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1202 15:16:54.121738       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1202 15:16:54.153720       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1202 15:16:54.222652       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:16:54.254132       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad] <==
	I1202 15:16:25.853277       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:16:25.962494       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 15:16:26.063134       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 15:16:26.063845       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:16:26.063975       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:16:26.087762       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:16:26.087816       1 server_linux.go:132] "Using iptables Proxier"
	I1202 15:16:26.094097       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:16:26.094560       1 server.go:527] "Version info" version="v1.34.2"
	I1202 15:16:26.094604       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:16:26.096806       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:16:26.096839       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:16:26.096857       1 config.go:200] "Starting service config controller"
	I1202 15:16:26.096864       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:16:26.096892       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:16:26.096897       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:16:26.097227       1 config.go:309] "Starting node config controller"
	I1202 15:16:26.097239       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:16:26.097252       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:16:26.197515       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:16:26.197514       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:16:26.197563       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb] <==
	E1202 15:16:17.124893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:16:17.124921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 15:16:17.124954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 15:16:17.124974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 15:16:17.124982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 15:16:17.124998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 15:16:17.125004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 15:16:17.125052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 15:16:17.125080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:16:17.125152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 15:16:17.125234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:16:17.125260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 15:16:17.996089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 15:16:18.021360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 15:16:18.026918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:16:18.078405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 15:16:18.110458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 15:16:18.200195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 15:16:18.240698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 15:16:18.249838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:16:18.306060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:16:18.310110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 15:16:18.370933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 15:16:18.375027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1202 15:16:20.821964       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 15:18:18 addons-141726 kubelet[1278]: I1202 15:18:18.804361    1278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^1cada9fe-cf92-11f0-8a1f-8e9453da8fd7" (OuterVolumeSpecName: "task-pv-storage") pod "31fe3732-d286-421b-ae0b-4c742ba67e88" (UID: "31fe3732-d286-421b-ae0b-4c742ba67e88"). InnerVolumeSpecName "pvc-ff77ad93-76fc-4323-b0c5-f40c4d477487". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 02 15:18:18 addons-141726 kubelet[1278]: I1202 15:18:18.902013    1278 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/31fe3732-d286-421b-ae0b-4c742ba67e88-gcp-creds\") on node \"addons-141726\" DevicePath \"\""
	Dec 02 15:18:18 addons-141726 kubelet[1278]: I1202 15:18:18.902058    1278 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5zrdx\" (UniqueName: \"kubernetes.io/projected/31fe3732-d286-421b-ae0b-4c742ba67e88-kube-api-access-5zrdx\") on node \"addons-141726\" DevicePath \"\""
	Dec 02 15:18:18 addons-141726 kubelet[1278]: I1202 15:18:18.902102    1278 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-ff77ad93-76fc-4323-b0c5-f40c4d477487\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1cada9fe-cf92-11f0-8a1f-8e9453da8fd7\") on node \"addons-141726\" "
	Dec 02 15:18:18 addons-141726 kubelet[1278]: I1202 15:18:18.906929    1278 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-ff77ad93-76fc-4323-b0c5-f40c4d477487" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^1cada9fe-cf92-11f0-8a1f-8e9453da8fd7") on node "addons-141726"
	Dec 02 15:18:19 addons-141726 kubelet[1278]: I1202 15:18:19.003016    1278 reconciler_common.go:299] "Volume detached for volume \"pvc-ff77ad93-76fc-4323-b0c5-f40c4d477487\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1cada9fe-cf92-11f0-8a1f-8e9453da8fd7\") on node \"addons-141726\" DevicePath \"\""
	Dec 02 15:18:19 addons-141726 kubelet[1278]: I1202 15:18:19.030774    1278 scope.go:117] "RemoveContainer" containerID="0726b7a957fc5fe213e5e60ce722a46f077db78bc70d1deebad4368081208da6"
	Dec 02 15:18:19 addons-141726 kubelet[1278]: I1202 15:18:19.041466    1278 scope.go:117] "RemoveContainer" containerID="0726b7a957fc5fe213e5e60ce722a46f077db78bc70d1deebad4368081208da6"
	Dec 02 15:18:19 addons-141726 kubelet[1278]: E1202 15:18:19.041939    1278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0726b7a957fc5fe213e5e60ce722a46f077db78bc70d1deebad4368081208da6\": container with ID starting with 0726b7a957fc5fe213e5e60ce722a46f077db78bc70d1deebad4368081208da6 not found: ID does not exist" containerID="0726b7a957fc5fe213e5e60ce722a46f077db78bc70d1deebad4368081208da6"
	Dec 02 15:18:19 addons-141726 kubelet[1278]: I1202 15:18:19.041990    1278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0726b7a957fc5fe213e5e60ce722a46f077db78bc70d1deebad4368081208da6"} err="failed to get container status \"0726b7a957fc5fe213e5e60ce722a46f077db78bc70d1deebad4368081208da6\": rpc error: code = NotFound desc = could not find container \"0726b7a957fc5fe213e5e60ce722a46f077db78bc70d1deebad4368081208da6\": container with ID starting with 0726b7a957fc5fe213e5e60ce722a46f077db78bc70d1deebad4368081208da6 not found: ID does not exist"
	Dec 02 15:18:19 addons-141726 kubelet[1278]: I1202 15:18:19.413642    1278 scope.go:117] "RemoveContainer" containerID="6d5178be61a83bc680737ef70d1d79da907ba4d851d12da31716f14525bb326b"
	Dec 02 15:18:19 addons-141726 kubelet[1278]: I1202 15:18:19.422717    1278 scope.go:117] "RemoveContainer" containerID="ee7da34f110ee50ee309dd7ec289f3971a649ce3978ef9193181c23cbad71f82"
	Dec 02 15:18:19 addons-141726 kubelet[1278]: I1202 15:18:19.428650    1278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fe3732-d286-421b-ae0b-4c742ba67e88" path="/var/lib/kubelet/pods/31fe3732-d286-421b-ae0b-4c742ba67e88/volumes"
	Dec 02 15:18:19 addons-141726 kubelet[1278]: I1202 15:18:19.431093    1278 scope.go:117] "RemoveContainer" containerID="3334bfe46f7609903c21a3eb2da2b86d0a99c841068581b836890e02337078ca"
	Dec 02 15:18:19 addons-141726 kubelet[1278]: I1202 15:18:19.439161    1278 scope.go:117] "RemoveContainer" containerID="3ed71df5b0c1fdf236d5cffc52fc8741c0f0a4f4819465eaf1272da859c0d707"
	Dec 02 15:18:19 addons-141726 kubelet[1278]: I1202 15:18:19.446681    1278 scope.go:117] "RemoveContainer" containerID="0993cca017066af8e006634daffd73ad24c179e419142d4db5deeac4f15b56c7"
	Dec 02 15:18:31 addons-141726 kubelet[1278]: I1202 15:18:31.429063    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-md75n" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 15:18:39 addons-141726 kubelet[1278]: E1202 15:18:39.493930    1278 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-pw2zl" podUID="39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35"
	Dec 02 15:18:57 addons-141726 kubelet[1278]: I1202 15:18:57.191134    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-pw2zl" podStartSLOduration=150.600776867 podStartE2EDuration="2m32.191110282s" podCreationTimestamp="2025-12-02 15:16:25 +0000 UTC" firstStartedPulling="2025-12-02 15:18:55.449654109 +0000 UTC m=+156.108013876" lastFinishedPulling="2025-12-02 15:18:57.039987523 +0000 UTC m=+157.698347291" observedRunningTime="2025-12-02 15:18:57.189921112 +0000 UTC m=+157.848280898" watchObservedRunningTime="2025-12-02 15:18:57.191110282 +0000 UTC m=+157.849470068"
	Dec 02 15:19:21 addons-141726 kubelet[1278]: I1202 15:19:21.425949    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-5f7fs" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 15:19:25 addons-141726 kubelet[1278]: I1202 15:19:25.426189    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-4lmgt" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 15:19:32 addons-141726 kubelet[1278]: I1202 15:19:32.426031    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gdvkl" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 15:19:41 addons-141726 kubelet[1278]: I1202 15:19:41.425783    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-md75n" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 15:20:13 addons-141726 kubelet[1278]: I1202 15:20:13.386995    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/544dd814-f512-4a19-bd04-fa00a5e89ecd-gcp-creds\") pod \"hello-world-app-5d498dc89-2bmrl\" (UID: \"544dd814-f512-4a19-bd04-fa00a5e89ecd\") " pod="default/hello-world-app-5d498dc89-2bmrl"
	Dec 02 15:20:13 addons-141726 kubelet[1278]: I1202 15:20:13.387075    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g47br\" (UniqueName: \"kubernetes.io/projected/544dd814-f512-4a19-bd04-fa00a5e89ecd-kube-api-access-g47br\") pod \"hello-world-app-5d498dc89-2bmrl\" (UID: \"544dd814-f512-4a19-bd04-fa00a5e89ecd\") " pod="default/hello-world-app-5d498dc89-2bmrl"
	
	
	==> storage-provisioner [ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2] <==
	W1202 15:19:49.911965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:19:51.914977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:19:51.918851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:19:53.921937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:19:53.925977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:19:55.929204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:19:55.934202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:19:57.938522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:19:57.943518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:19:59.946470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:19:59.950178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:01.953239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:01.957068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:03.959969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:03.964228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:05.967150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:05.971262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:07.974905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:07.978823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:09.982063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:09.986255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:11.989640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:11.995182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:13.999644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:20:14.006365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-141726 -n addons-141726
helpers_test.go:269: (dbg) Run:  kubectl --context addons-141726 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-xjhbl ingress-nginx-admission-patch-dz5cl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-141726 describe pod ingress-nginx-admission-create-xjhbl ingress-nginx-admission-patch-dz5cl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-141726 describe pod ingress-nginx-admission-create-xjhbl ingress-nginx-admission-patch-dz5cl: exit status 1 (69.326142ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xjhbl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dz5cl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-141726 describe pod ingress-nginx-admission-create-xjhbl ingress-nginx-admission-patch-dz5cl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (255.238885ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:20:15.905612  284509 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:20:15.905932  284509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:20:15.905943  284509 out.go:374] Setting ErrFile to fd 2...
	I1202 15:20:15.905948  284509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:20:15.906213  284509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:20:15.906538  284509 mustload.go:66] Loading cluster: addons-141726
	I1202 15:20:15.906852  284509 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:20:15.906876  284509 addons.go:622] checking whether the cluster is paused
	I1202 15:20:15.906957  284509 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:20:15.906974  284509 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:20:15.907353  284509 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:20:15.928115  284509 ssh_runner.go:195] Run: systemctl --version
	I1202 15:20:15.928183  284509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:20:15.946889  284509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:20:16.047030  284509 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:20:16.047101  284509 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:20:16.076945  284509 cri.go:89] found id: "e00f0e468c729689ef94a7bcf9f05744ea3b630f3b93460659837cda53f68881"
	I1202 15:20:16.076988  284509 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:20:16.076996  284509 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:20:16.077002  284509 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:20:16.077006  284509 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:20:16.077013  284509 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:20:16.077020  284509 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:20:16.077025  284509 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:20:16.077031  284509 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:20:16.077056  284509 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:20:16.077066  284509 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:20:16.077069  284509 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:20:16.077072  284509 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:20:16.077075  284509 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:20:16.077078  284509 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:20:16.077086  284509 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:20:16.077090  284509 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:20:16.077095  284509 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:20:16.077098  284509 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:20:16.077104  284509 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:20:16.077107  284509 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:20:16.077110  284509 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:20:16.077113  284509 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:20:16.077116  284509 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:20:16.077119  284509 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:20:16.077122  284509 cri.go:89] found id: ""
	I1202 15:20:16.077172  284509 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:20:16.092383  284509 out.go:203] 
	W1202 15:20:16.093543  284509 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:20:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:20:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:20:16.093561  284509 out.go:285] * 
	* 
	W1202 15:20:16.096838  284509 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:20:16.098172  284509 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable ingress --alsologtostderr -v=1: exit status 11 (251.36783ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:20:16.160337  284573 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:20:16.160507  284573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:20:16.160519  284573 out.go:374] Setting ErrFile to fd 2...
	I1202 15:20:16.160523  284573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:20:16.160714  284573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:20:16.160983  284573 mustload.go:66] Loading cluster: addons-141726
	I1202 15:20:16.161308  284573 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:20:16.161331  284573 addons.go:622] checking whether the cluster is paused
	I1202 15:20:16.161411  284573 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:20:16.161451  284573 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:20:16.161877  284573 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:20:16.179826  284573 ssh_runner.go:195] Run: systemctl --version
	I1202 15:20:16.179887  284573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:20:16.197743  284573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:20:16.297161  284573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:20:16.297301  284573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:20:16.328037  284573 cri.go:89] found id: "e00f0e468c729689ef94a7bcf9f05744ea3b630f3b93460659837cda53f68881"
	I1202 15:20:16.328069  284573 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:20:16.328076  284573 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:20:16.328082  284573 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:20:16.328087  284573 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:20:16.328092  284573 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:20:16.328097  284573 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:20:16.328101  284573 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:20:16.328106  284573 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:20:16.328113  284573 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:20:16.328120  284573 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:20:16.328123  284573 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:20:16.328129  284573 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:20:16.328145  284573 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:20:16.328154  284573 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:20:16.328178  284573 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:20:16.328190  284573 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:20:16.328195  284573 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:20:16.328199  284573 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:20:16.328204  284573 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:20:16.328212  284573 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:20:16.328216  284573 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:20:16.328218  284573 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:20:16.328221  284573 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:20:16.328223  284573 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:20:16.328227  284573 cri.go:89] found id: ""
	I1202 15:20:16.328285  284573 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:20:16.343706  284573 out.go:203] 
	W1202 15:20:16.344907  284573 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:20:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:20:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:20:16.344938  284573 out.go:285] * 
	* 
	W1202 15:20:16.348104  284573 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:20:16.349441  284573 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.40s)
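The MK_ADDON_DISABLE_PAUSED exits above come from the paused-state check that `minikube addons disable` performs before touching the addon (addons.go:622 "checking whether the cluster is paused" in the stderr traces). A minimal sketch of re-running that check by hand against this profile, using the same commands shown in the trace over `minikube ssh` (assumes the addons-141726 profile is still running; this is an illustration, not part of the test harness):

	minikube -p addons-141726 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-141726 ssh -- sudo runc list -f json    # the step that fails: "open /run/runc: no such file or directory" on this crio node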

                                                
                                    
TestAddons/parallel/InspektorGadget (5.32s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-sbzvc" [0c3ea4e6-0d89-45a6-8d89-164a6f4dda43] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005789424s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (310.323644ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:17:54.413039  281371 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:54.413187  281371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:54.413205  281371 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:54.413212  281371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:54.413894  281371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:17:54.414282  281371 mustload.go:66] Loading cluster: addons-141726
	I1202 15:17:54.414770  281371 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:54.414802  281371 addons.go:622] checking whether the cluster is paused
	I1202 15:17:54.414936  281371 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:54.414962  281371 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:17:54.415586  281371 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:17:54.436996  281371 ssh_runner.go:195] Run: systemctl --version
	I1202 15:17:54.437081  281371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:17:54.460796  281371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:17:54.572585  281371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:17:54.572706  281371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:17:54.613588  281371 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:17:54.613664  281371 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:17:54.613672  281371 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:17:54.613678  281371 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:17:54.613682  281371 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:17:54.613687  281371 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:17:54.613691  281371 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:17:54.613696  281371 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:17:54.613701  281371 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:17:54.613715  281371 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:17:54.613720  281371 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:17:54.613732  281371 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:17:54.613738  281371 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:17:54.613743  281371 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:17:54.613748  281371 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:17:54.613763  281371 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:17:54.613768  281371 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:17:54.613776  281371 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:17:54.613781  281371 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:17:54.613786  281371 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:17:54.613795  281371 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:17:54.613800  281371 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:17:54.613804  281371 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:17:54.613809  281371 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:17:54.613813  281371 cri.go:89] found id: ""
	I1202 15:17:54.613864  281371 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:17:54.634137  281371 out.go:203] 
	W1202 15:17:54.635537  281371 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:17:54.635560  281371 out.go:285] * 
	* 
	W1202 15:17:54.640988  281371 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:17:54.642597  281371 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.32s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.37s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.401451ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-fdkfv" [29527913-8c48-4e43-932a-d58b491cf15d] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00386302s
addons_test.go:463: (dbg) Run:  kubectl --context addons-141726 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (291.061959ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:17:50.851630  280389 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:50.851924  280389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:50.851929  280389 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:50.851935  280389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:50.852221  280389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:17:50.852550  280389 mustload.go:66] Loading cluster: addons-141726
	I1202 15:17:50.853053  280389 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:50.853071  280389 addons.go:622] checking whether the cluster is paused
	I1202 15:17:50.853194  280389 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:50.853209  280389 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:17:50.853792  280389 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:17:50.876952  280389 ssh_runner.go:195] Run: systemctl --version
	I1202 15:17:50.877015  280389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:17:50.901533  280389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:17:51.009582  280389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:17:51.009690  280389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:17:51.045752  280389 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:17:51.045779  280389 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:17:51.045786  280389 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:17:51.045791  280389 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:17:51.045795  280389 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:17:51.045801  280389 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:17:51.045806  280389 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:17:51.045811  280389 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:17:51.045815  280389 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:17:51.045829  280389 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:17:51.045838  280389 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:17:51.045843  280389 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:17:51.045848  280389 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:17:51.045852  280389 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:17:51.045856  280389 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:17:51.045877  280389 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:17:51.045888  280389 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:17:51.045895  280389 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:17:51.045899  280389 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:17:51.045903  280389 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:17:51.045908  280389 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:17:51.045915  280389 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:17:51.045920  280389 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:17:51.045929  280389 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:17:51.045935  280389 cri.go:89] found id: ""
	I1202 15:17:51.045984  280389 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:17:51.064156  280389 out.go:203] 
	W1202 15:17:51.065693  280389 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:17:51.065773  280389 out.go:285] * 
	* 
	W1202 15:17:51.070599  280389 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:17:51.072151  280389 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.37s)

                                                
                                    
TestAddons/parallel/CSI (33.2s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1202 15:17:46.712649  268099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1202 15:17:46.715941  268099 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1202 15:17:46.715963  268099 kapi.go:107] duration metric: took 3.352868ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.363698ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-141726 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-141726 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [3578d8c3-3a0b-4023-9bf8-ee695daba394] Pending
helpers_test.go:352: "task-pv-pod" [3578d8c3-3a0b-4023-9bf8-ee695daba394] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [3578d8c3-3a0b-4023-9bf8-ee695daba394] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003610081s
addons_test.go:572: (dbg) Run:  kubectl --context addons-141726 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-141726 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-141726 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-141726 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-141726 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-141726 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-141726 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [31fe3732-d286-421b-ae0b-4c742ba67e88] Pending
helpers_test.go:352: "task-pv-pod-restore" [31fe3732-d286-421b-ae0b-4c742ba67e88] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [31fe3732-d286-421b-ae0b-4c742ba67e88] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004316663s
addons_test.go:614: (dbg) Run:  kubectl --context addons-141726 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-141726 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-141726 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (268.568115ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:18:19.442114  282189 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:18:19.442207  282189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:18:19.442214  282189 out.go:374] Setting ErrFile to fd 2...
	I1202 15:18:19.442219  282189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:18:19.442517  282189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:18:19.442803  282189 mustload.go:66] Loading cluster: addons-141726
	I1202 15:18:19.443166  282189 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:18:19.443194  282189 addons.go:622] checking whether the cluster is paused
	I1202 15:18:19.443294  282189 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:18:19.443313  282189 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:18:19.443727  282189 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:18:19.464850  282189 ssh_runner.go:195] Run: systemctl --version
	I1202 15:18:19.464918  282189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:18:19.486811  282189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:18:19.588761  282189 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:18:19.588842  282189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:18:19.620944  282189 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:18:19.620978  282189 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:18:19.620982  282189 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:18:19.620986  282189 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:18:19.620990  282189 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:18:19.620994  282189 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:18:19.620997  282189 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:18:19.621000  282189 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:18:19.621003  282189 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:18:19.621018  282189 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:18:19.621021  282189 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:18:19.621024  282189 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:18:19.621026  282189 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:18:19.621030  282189 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:18:19.621032  282189 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:18:19.621039  282189 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:18:19.621044  282189 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:18:19.621048  282189 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:18:19.621051  282189 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:18:19.621054  282189 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:18:19.621058  282189 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:18:19.621061  282189 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:18:19.621063  282189 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:18:19.621066  282189 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:18:19.621069  282189 cri.go:89] found id: ""
	I1202 15:18:19.621113  282189 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:18:19.636452  282189 out.go:203] 
	W1202 15:18:19.637619  282189 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:18:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:18:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:18:19.637643  282189 out.go:285] * 
	* 
	W1202 15:18:19.640789  282189 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:18:19.642199  282189 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (262.069679ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:18:19.715214  282258 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:18:19.715471  282258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:18:19.715481  282258 out.go:374] Setting ErrFile to fd 2...
	I1202 15:18:19.715485  282258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:18:19.715715  282258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:18:19.715980  282258 mustload.go:66] Loading cluster: addons-141726
	I1202 15:18:19.716296  282258 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:18:19.716313  282258 addons.go:622] checking whether the cluster is paused
	I1202 15:18:19.716395  282258 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:18:19.716410  282258 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:18:19.716792  282258 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:18:19.735101  282258 ssh_runner.go:195] Run: systemctl --version
	I1202 15:18:19.735179  282258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:18:19.754139  282258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:18:19.853397  282258 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:18:19.853543  282258 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:18:19.883216  282258 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:18:19.883247  282258 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:18:19.883251  282258 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:18:19.883255  282258 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:18:19.883258  282258 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:18:19.883262  282258 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:18:19.883265  282258 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:18:19.883268  282258 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:18:19.883271  282258 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:18:19.883282  282258 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:18:19.883285  282258 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:18:19.883288  282258 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:18:19.883290  282258 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:18:19.883293  282258 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:18:19.883296  282258 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:18:19.883303  282258 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:18:19.883308  282258 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:18:19.883313  282258 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:18:19.883316  282258 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:18:19.883318  282258 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:18:19.883321  282258 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:18:19.883324  282258 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:18:19.883326  282258 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:18:19.883329  282258 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:18:19.883332  282258 cri.go:89] found id: ""
	I1202 15:18:19.883381  282258 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:18:19.899551  282258 out.go:203] 
	W1202 15:18:19.900858  282258 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:18:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:18:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:18:19.900908  282258 out.go:285] * 
	* 
	W1202 15:18:19.904224  282258 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:18:19.905548  282258 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (33.20s)
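The MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED exits in this group share one root cause, visible in the stderr above: minikube's paused-state check lists the kube-system containers with crictl successfully, then runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this crio node (one plausible explanation, not confirmed by the log, is that CRI-O here uses a different OCI runtime, so runc never created its state directory). A minimal way to re-run the same check by hand, assuming the addons-141726 profile is still up, is:

	minikube -p addons-141726 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-141726 ssh -- sudo runc list -f json
	minikube -p addons-141726 ssh -- sudo ls /run/runc

The first two commands mirror what cri.go and ssh_runner.go execute in the logs above; the third simply confirms whether runc's state directory is present on the node.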

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-141726 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-141726 --alsologtostderr -v=1: exit status 11 (253.699246ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:17:35.243575  278037 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:35.243714  278037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:35.243724  278037 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:35.243729  278037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:35.243923  278037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:17:35.244156  278037 mustload.go:66] Loading cluster: addons-141726
	I1202 15:17:35.244526  278037 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:35.244554  278037 addons.go:622] checking whether the cluster is paused
	I1202 15:17:35.244648  278037 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:35.244665  278037 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:17:35.245081  278037 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:17:35.262899  278037 ssh_runner.go:195] Run: systemctl --version
	I1202 15:17:35.262960  278037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:17:35.281494  278037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:17:35.380463  278037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:17:35.380549  278037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:17:35.411758  278037 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:17:35.411795  278037 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:17:35.411800  278037 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:17:35.411804  278037 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:17:35.411807  278037 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:17:35.411813  278037 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:17:35.411817  278037 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:17:35.411822  278037 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:17:35.411826  278037 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:17:35.411851  278037 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:17:35.411859  278037 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:17:35.411864  278037 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:17:35.411869  278037 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:17:35.411876  278037 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:17:35.411880  278037 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:17:35.411897  278037 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:17:35.411908  278037 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:17:35.411916  278037 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:17:35.411920  278037 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:17:35.411924  278037 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:17:35.411932  278037 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:17:35.411940  278037 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:17:35.411951  278037 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:17:35.411959  278037 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:17:35.411964  278037 cri.go:89] found id: ""
	I1202 15:17:35.412032  278037 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:17:35.426895  278037 out.go:203] 
	W1202 15:17:35.428186  278037 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:17:35.428208  278037 out.go:285] * 
	* 
	W1202 15:17:35.431378  278037 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:17:35.432608  278037 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-141726 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-141726
helpers_test.go:243: (dbg) docker inspect addons-141726:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058",
	        "Created": "2025-12-02T15:16:04.050874973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270528,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:16:04.091838148Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058/hostname",
	        "HostsPath": "/var/lib/docker/containers/128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058/hosts",
	        "LogPath": "/var/lib/docker/containers/128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058/128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058-json.log",
	        "Name": "/addons-141726",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-141726:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-141726",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "128a4d3a45efb39bab8b4a2d7a8e688362fd72fe81f4a04c608da9f1a4dcb058",
	                "LowerDir": "/var/lib/docker/overlay2/58bbd9985dadadf3d6595010c73fc5198a3bfe6d0d3000d27fa89fa52c5738c5-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58bbd9985dadadf3d6595010c73fc5198a3bfe6d0d3000d27fa89fa52c5738c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58bbd9985dadadf3d6595010c73fc5198a3bfe6d0d3000d27fa89fa52c5738c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58bbd9985dadadf3d6595010c73fc5198a3bfe6d0d3000d27fa89fa52c5738c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-141726",
	                "Source": "/var/lib/docker/volumes/addons-141726/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-141726",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-141726",
	                "name.minikube.sigs.k8s.io": "addons-141726",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "79bbe823923829093b02d1fcb315c9f6d3c1fd95b694701f6715b7dd48ef5778",
	            "SandboxKey": "/var/run/docker/netns/79bbe8239238",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-141726": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e41e36b0a1e82b35432a130dea23fec5397aed2b06197e08a06740fba19835d3",
	                    "EndpointID": "7339b321bd5eb17037d4cb7c4aaf082ed99117a37312af2842c7b3d314c98b7d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "be:c8:fa:e1:8b:2b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-141726",
	                        "128a4d3a45ef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
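The NetworkSettings.Ports map in the inspect output above is where the 127.0.0.1:32888 SSH endpoint seen throughout the stderr logs comes from: port 22/tcp inside the node container is published to that host port. The Go-template lookup the cli_runner performs can be repeated by hand (quoting adjusted for an interactive shell) and should print 32888 for this run:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-141726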
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-141726 -n addons-141726
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-141726 logs -n 25: (1.153390745s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-794731 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-794731   │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ delete  │ -p download-only-794731                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-794731   │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ start   │ -o=json --download-only -p download-only-403279 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-403279   │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ delete  │ -p download-only-403279                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-403279   │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ start   │ -o=json --download-only -p download-only-509172 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-509172   │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ delete  │ -p download-only-509172                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-509172   │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ delete  │ -p download-only-794731                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-794731   │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ delete  │ -p download-only-403279                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-403279   │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ delete  │ -p download-only-509172                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-509172   │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ start   │ --download-only -p download-docker-358841 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-358841 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ delete  │ -p download-docker-358841                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-358841 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ start   │ --download-only -p binary-mirror-703402 --alsologtostderr --binary-mirror http://127.0.0.1:46397 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-703402   │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ delete  │ -p binary-mirror-703402                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-703402   │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ addons  │ disable dashboard -p addons-141726                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-141726          │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ addons  │ enable dashboard -p addons-141726                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-141726          │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ start   │ -p addons-141726 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-141726          │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:17 UTC │
	│ addons  │ addons-141726 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-141726          │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ addons-141726 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-141726          │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ addons  │ enable headlamp -p addons-141726 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-141726          │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:15:42
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:15:42.428066  269889 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:15:42.428361  269889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:15:42.428373  269889 out.go:374] Setting ErrFile to fd 2...
	I1202 15:15:42.428378  269889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:15:42.428620  269889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:15:42.429212  269889 out.go:368] Setting JSON to false
	I1202 15:15:42.430229  269889 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7083,"bootTime":1764681459,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:15:42.430301  269889 start.go:143] virtualization: kvm guest
	I1202 15:15:42.432355  269889 out.go:179] * [addons-141726] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:15:42.433715  269889 notify.go:221] Checking for updates...
	I1202 15:15:42.433727  269889 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:15:42.435025  269889 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:15:42.436436  269889 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:15:42.437579  269889 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 15:15:42.438837  269889 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:15:42.440376  269889 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:15:42.441905  269889 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:15:42.465312  269889 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:15:42.465521  269889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:15:42.526757  269889 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-12-02 15:15:42.516398623 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:15:42.526871  269889 docker.go:319] overlay module found
	I1202 15:15:42.528672  269889 out.go:179] * Using the docker driver based on user configuration
	I1202 15:15:42.529946  269889 start.go:309] selected driver: docker
	I1202 15:15:42.529968  269889 start.go:927] validating driver "docker" against <nil>
	I1202 15:15:42.529987  269889 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:15:42.530509  269889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:15:42.593812  269889 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-12-02 15:15:42.583386999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:15:42.594040  269889 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 15:15:42.594276  269889 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 15:15:42.595892  269889 out.go:179] * Using Docker driver with root privileges
	I1202 15:15:42.596777  269889 cni.go:84] Creating CNI manager for ""
	I1202 15:15:42.596844  269889 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 15:15:42.596857  269889 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 15:15:42.596931  269889 start.go:353] cluster config:
	{Name:addons-141726 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-141726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:15:42.598015  269889 out.go:179] * Starting "addons-141726" primary control-plane node in "addons-141726" cluster
	I1202 15:15:42.598965  269889 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 15:15:42.600084  269889 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 15:15:42.601105  269889 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 15:15:42.601148  269889 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 15:15:42.601144  269889 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 15:15:42.601179  269889 cache.go:65] Caching tarball of preloaded images
	I1202 15:15:42.601459  269889 preload.go:238] Found /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 15:15:42.601476  269889 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 15:15:42.601820  269889 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/config.json ...
	I1202 15:15:42.601853  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/config.json: {Name:mk2f435c1f3622184bd17cd188725050f114eedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:15:42.620229  269889 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 15:15:42.620356  269889 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 15:15:42.620372  269889 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1202 15:15:42.620377  269889 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1202 15:15:42.620387  269889 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1202 15:15:42.620392  269889 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1202 15:15:56.007134  269889 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1202 15:15:56.007191  269889 cache.go:243] Successfully downloaded all kic artifacts
	I1202 15:15:56.007244  269889 start.go:360] acquireMachinesLock for addons-141726: {Name:mk4ed9ed1d49aa4c0786fb49dc3ee4a34ea8161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 15:15:56.007362  269889 start.go:364] duration metric: took 91.547µs to acquireMachinesLock for "addons-141726"
	I1202 15:15:56.007395  269889 start.go:93] Provisioning new machine with config: &{Name:addons-141726 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-141726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 15:15:56.007509  269889 start.go:125] createHost starting for "" (driver="docker")
	I1202 15:15:56.009323  269889 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1202 15:15:56.009577  269889 start.go:159] libmachine.API.Create for "addons-141726" (driver="docker")
	I1202 15:15:56.009609  269889 client.go:173] LocalClient.Create starting
	I1202 15:15:56.009832  269889 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem
	I1202 15:15:56.059961  269889 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem
	I1202 15:15:56.113136  269889 cli_runner.go:164] Run: docker network inspect addons-141726 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 15:15:56.130624  269889 cli_runner.go:211] docker network inspect addons-141726 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 15:15:56.130706  269889 network_create.go:284] running [docker network inspect addons-141726] to gather additional debugging logs...
	I1202 15:15:56.130725  269889 cli_runner.go:164] Run: docker network inspect addons-141726
	W1202 15:15:56.147310  269889 cli_runner.go:211] docker network inspect addons-141726 returned with exit code 1
	I1202 15:15:56.147339  269889 network_create.go:287] error running [docker network inspect addons-141726]: docker network inspect addons-141726: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-141726 not found
	I1202 15:15:56.147354  269889 network_create.go:289] output of [docker network inspect addons-141726]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-141726 not found
	
	** /stderr **
	I1202 15:15:56.147479  269889 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 15:15:56.164293  269889 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014a9700}
	I1202 15:15:56.164346  269889 network_create.go:124] attempt to create docker network addons-141726 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 15:15:56.164393  269889 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-141726 addons-141726
	I1202 15:15:56.213302  269889 network_create.go:108] docker network addons-141726 192.168.49.0/24 created
	I1202 15:15:56.213341  269889 kic.go:121] calculated static IP "192.168.49.2" for the "addons-141726" container
	I1202 15:15:56.213413  269889 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 15:15:56.229680  269889 cli_runner.go:164] Run: docker volume create addons-141726 --label name.minikube.sigs.k8s.io=addons-141726 --label created_by.minikube.sigs.k8s.io=true
	I1202 15:15:56.247956  269889 oci.go:103] Successfully created a docker volume addons-141726
	I1202 15:15:56.248062  269889 cli_runner.go:164] Run: docker run --rm --name addons-141726-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-141726 --entrypoint /usr/bin/test -v addons-141726:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 15:16:00.197128  269889 cli_runner.go:217] Completed: docker run --rm --name addons-141726-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-141726 --entrypoint /usr/bin/test -v addons-141726:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (3.949013122s)
	I1202 15:16:00.197164  269889 oci.go:107] Successfully prepared a docker volume addons-141726
	I1202 15:16:00.197202  269889 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 15:16:00.197215  269889 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 15:16:00.197270  269889 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-141726:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 15:16:03.974465  269889 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-141726:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.777130036s)
	I1202 15:16:03.974496  269889 kic.go:203] duration metric: took 3.777278351s to extract preloaded images to volume ...
	W1202 15:16:03.974594  269889 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 15:16:03.974635  269889 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 15:16:03.974709  269889 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 15:16:04.034948  269889 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-141726 --name addons-141726 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-141726 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-141726 --network addons-141726 --ip 192.168.49.2 --volume addons-141726:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 15:16:04.312594  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Running}}
	I1202 15:16:04.330854  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:04.349814  269889 cli_runner.go:164] Run: docker exec addons-141726 stat /var/lib/dpkg/alternatives/iptables
	I1202 15:16:04.404810  269889 oci.go:144] the created container "addons-141726" has a running status.
	I1202 15:16:04.404842  269889 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa...
	I1202 15:16:04.486749  269889 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 15:16:04.510326  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:04.529996  269889 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 15:16:04.530022  269889 kic_runner.go:114] Args: [docker exec --privileged addons-141726 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 15:16:04.589083  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:04.614601  269889 machine.go:94] provisionDockerMachine start ...
	I1202 15:16:04.614710  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:04.639186  269889 main.go:143] libmachine: Using SSH client type: native
	I1202 15:16:04.639921  269889 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1202 15:16:04.639946  269889 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 15:16:04.640717  269889 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45500->127.0.0.1:32888: read: connection reset by peer
	I1202 15:16:07.780766  269889 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-141726
	
	I1202 15:16:07.780795  269889 ubuntu.go:182] provisioning hostname "addons-141726"
	I1202 15:16:07.780862  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:07.799055  269889 main.go:143] libmachine: Using SSH client type: native
	I1202 15:16:07.799306  269889 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1202 15:16:07.799322  269889 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-141726 && echo "addons-141726" | sudo tee /etc/hostname
	I1202 15:16:07.949586  269889 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-141726
	
	I1202 15:16:07.949678  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:07.970881  269889 main.go:143] libmachine: Using SSH client type: native
	I1202 15:16:07.971087  269889 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1202 15:16:07.971102  269889 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-141726' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-141726/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-141726' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 15:16:08.112359  269889 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 15:16:08.112394  269889 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 15:16:08.112461  269889 ubuntu.go:190] setting up certificates
	I1202 15:16:08.112476  269889 provision.go:84] configureAuth start
	I1202 15:16:08.112537  269889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-141726
	I1202 15:16:08.130395  269889 provision.go:143] copyHostCerts
	I1202 15:16:08.130501  269889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 15:16:08.130639  269889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 15:16:08.130699  269889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 15:16:08.130752  269889 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.addons-141726 san=[127.0.0.1 192.168.49.2 addons-141726 localhost minikube]
	I1202 15:16:08.211091  269889 provision.go:177] copyRemoteCerts
	I1202 15:16:08.211154  269889 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 15:16:08.211186  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:08.229442  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:08.329792  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 15:16:08.349287  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 15:16:08.366869  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 15:16:08.384943  269889 provision.go:87] duration metric: took 272.449321ms to configureAuth
	I1202 15:16:08.384989  269889 ubuntu.go:206] setting minikube options for container-runtime
	I1202 15:16:08.385178  269889 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:16:08.385297  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:08.403552  269889 main.go:143] libmachine: Using SSH client type: native
	I1202 15:16:08.403764  269889 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1202 15:16:08.403779  269889 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 15:16:08.688245  269889 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 15:16:08.688267  269889 machine.go:97] duration metric: took 4.073634043s to provisionDockerMachine
	I1202 15:16:08.688279  269889 client.go:176] duration metric: took 12.678663098s to LocalClient.Create
	I1202 15:16:08.688302  269889 start.go:167] duration metric: took 12.6787275s to libmachine.API.Create "addons-141726"
	I1202 15:16:08.688312  269889 start.go:293] postStartSetup for "addons-141726" (driver="docker")
	I1202 15:16:08.688324  269889 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 15:16:08.688380  269889 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 15:16:08.688466  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:08.705418  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:08.806557  269889 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 15:16:08.810168  269889 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 15:16:08.810202  269889 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 15:16:08.810215  269889 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 15:16:08.810277  269889 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 15:16:08.810305  269889 start.go:296] duration metric: took 121.985443ms for postStartSetup
	I1202 15:16:08.810594  269889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-141726
	I1202 15:16:08.828030  269889 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/config.json ...
	I1202 15:16:08.828330  269889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:16:08.828383  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:08.845605  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:08.941665  269889 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 15:16:08.946334  269889 start.go:128] duration metric: took 12.938805567s to createHost
	I1202 15:16:08.946363  269889 start.go:83] releasing machines lock for "addons-141726", held for 12.938985615s
	I1202 15:16:08.946447  269889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-141726
	I1202 15:16:08.963736  269889 ssh_runner.go:195] Run: cat /version.json
	I1202 15:16:08.963795  269889 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 15:16:08.963818  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:08.963875  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:08.981958  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:08.982304  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:09.133783  269889 ssh_runner.go:195] Run: systemctl --version
	I1202 15:16:09.140160  269889 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 15:16:09.173773  269889 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 15:16:09.178307  269889 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 15:16:09.178381  269889 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 15:16:09.203987  269889 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 15:16:09.204018  269889 start.go:496] detecting cgroup driver to use...
	I1202 15:16:09.204060  269889 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 15:16:09.204113  269889 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 15:16:09.219342  269889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 15:16:09.231095  269889 docker.go:218] disabling cri-docker service (if available) ...
	I1202 15:16:09.231171  269889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 15:16:09.247618  269889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 15:16:09.264259  269889 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 15:16:09.342378  269889 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 15:16:09.425692  269889 docker.go:234] disabling docker service ...
	I1202 15:16:09.425769  269889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 15:16:09.444094  269889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 15:16:09.456339  269889 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 15:16:09.534038  269889 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 15:16:09.615344  269889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 15:16:09.627704  269889 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 15:16:09.641748  269889 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 15:16:09.641813  269889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.651822  269889 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 15:16:09.651904  269889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.660790  269889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.669710  269889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.678305  269889 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 15:16:09.686257  269889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.694707  269889 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.707857  269889 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 15:16:09.716595  269889 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 15:16:09.723878  269889 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 15:16:09.731123  269889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 15:16:09.809831  269889 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 15:16:09.942488  269889 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 15:16:09.942578  269889 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 15:16:09.946778  269889 start.go:564] Will wait 60s for crictl version
	I1202 15:16:09.946831  269889 ssh_runner.go:195] Run: which crictl
	I1202 15:16:09.950567  269889 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 15:16:09.975040  269889 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 15:16:09.975141  269889 ssh_runner.go:195] Run: crio --version
	I1202 15:16:10.003092  269889 ssh_runner.go:195] Run: crio --version
	I1202 15:16:10.031881  269889 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 15:16:10.033172  269889 cli_runner.go:164] Run: docker network inspect addons-141726 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 15:16:10.051017  269889 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 15:16:10.055129  269889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 15:16:10.065608  269889 kubeadm.go:884] updating cluster {Name:addons-141726 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-141726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 15:16:10.065720  269889 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 15:16:10.065771  269889 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 15:16:10.096631  269889 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 15:16:10.096652  269889 crio.go:433] Images already preloaded, skipping extraction
	I1202 15:16:10.096700  269889 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 15:16:10.121994  269889 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 15:16:10.122014  269889 cache_images.go:86] Images are preloaded, skipping loading
	I1202 15:16:10.122022  269889 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 15:16:10.122136  269889 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-141726 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-141726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 15:16:10.122213  269889 ssh_runner.go:195] Run: crio config
	I1202 15:16:10.166938  269889 cni.go:84] Creating CNI manager for ""
	I1202 15:16:10.166959  269889 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 15:16:10.166984  269889 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 15:16:10.167014  269889 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-141726 NodeName:addons-141726 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 15:16:10.167165  269889 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-141726"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 15:16:10.167245  269889 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 15:16:10.176528  269889 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 15:16:10.176597  269889 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 15:16:10.185481  269889 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 15:16:10.198193  269889 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 15:16:10.213766  269889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1202 15:16:10.226652  269889 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 15:16:10.230447  269889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 15:16:10.240927  269889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 15:16:10.327552  269889 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 15:16:10.350931  269889 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726 for IP: 192.168.49.2
	I1202 15:16:10.350961  269889 certs.go:195] generating shared ca certs ...
	I1202 15:16:10.350983  269889 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.351126  269889 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 15:16:10.434776  269889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt ...
	I1202 15:16:10.434809  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt: {Name:mk7e072649a4b4c569a833f8cebcc046fa9ba225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.434995  269889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key ...
	I1202 15:16:10.435007  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key: {Name:mkd33308f48f06be4f494f9449310e44e1344a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.435093  269889 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 15:16:10.521841  269889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt ...
	I1202 15:16:10.521874  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt: {Name:mk8e36d0ab1ab4663173c4b721b0d09b33ed1a71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.522045  269889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key ...
	I1202 15:16:10.522056  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key: {Name:mkc7e1042aa3770527969456bd36137ed55e29d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.522132  269889 certs.go:257] generating profile certs ...
	I1202 15:16:10.522193  269889 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.key
	I1202 15:16:10.522212  269889 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt with IP's: []
	I1202 15:16:10.640012  269889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt ...
	I1202 15:16:10.640043  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: {Name:mk59530b14c997590b1fec6c9d583f6576bd969a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.640211  269889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.key ...
	I1202 15:16:10.640222  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.key: {Name:mk4ef8aa8a6edc1eef7da9e6cf38f0ff677d947e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.640294  269889 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.key.875445ec
	I1202 15:16:10.640314  269889 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.crt.875445ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 15:16:10.784072  269889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.crt.875445ec ...
	I1202 15:16:10.784101  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.crt.875445ec: {Name:mk5441da37da4bbc8e91e551854ac1e8a407c404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.784323  269889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.key.875445ec ...
	I1202 15:16:10.784347  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.key.875445ec: {Name:mk0bf01907c5401f83f3a079e735493d65e19e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.784476  269889 certs.go:382] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.crt.875445ec -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.crt
	I1202 15:16:10.784579  269889 certs.go:386] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.key.875445ec -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.key
	I1202 15:16:10.784652  269889 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.key
	I1202 15:16:10.784673  269889 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.crt with IP's: []
	I1202 15:16:10.885306  269889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.crt ...
	I1202 15:16:10.885338  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.crt: {Name:mkd8d30684698f4678aeb27ef0d90c15b8ca24ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.885574  269889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.key ...
	I1202 15:16:10.885595  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.key: {Name:mkfe613656e416c1b4f650e11394388c60c12cb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:10.885841  269889 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 15:16:10.885893  269889 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 15:16:10.885936  269889 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 15:16:10.885969  269889 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 15:16:10.886631  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 15:16:10.904702  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 15:16:10.922395  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 15:16:10.940727  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 15:16:10.960205  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 15:16:10.978600  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 15:16:10.995821  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 15:16:11.012659  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 15:16:11.029955  269889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 15:16:11.051449  269889 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 15:16:11.063882  269889 ssh_runner.go:195] Run: openssl version
	I1202 15:16:11.070008  269889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 15:16:11.080952  269889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 15:16:11.084618  269889 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 15:16:11.084681  269889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 15:16:11.118315  269889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 15:16:11.127021  269889 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 15:16:11.130652  269889 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 15:16:11.130715  269889 kubeadm.go:401] StartCluster: {Name:addons-141726 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-141726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:16:11.130794  269889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:16:11.130926  269889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:16:11.157006  269889 cri.go:89] found id: ""
	I1202 15:16:11.157071  269889 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 15:16:11.164986  269889 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 15:16:11.172945  269889 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 15:16:11.173040  269889 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 15:16:11.180871  269889 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 15:16:11.180902  269889 kubeadm.go:158] found existing configuration files:
	
	I1202 15:16:11.180944  269889 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 15:16:11.188664  269889 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 15:16:11.188712  269889 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 15:16:11.196461  269889 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 15:16:11.204368  269889 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 15:16:11.204471  269889 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 15:16:11.211941  269889 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 15:16:11.219342  269889 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 15:16:11.219410  269889 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 15:16:11.226584  269889 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 15:16:11.234248  269889 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 15:16:11.234307  269889 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 15:16:11.241835  269889 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 15:16:11.286140  269889 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 15:16:11.286223  269889 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 15:16:11.307924  269889 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 15:16:11.307997  269889 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 15:16:11.308071  269889 kubeadm.go:319] OS: Linux
	I1202 15:16:11.308137  269889 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 15:16:11.308197  269889 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 15:16:11.308248  269889 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 15:16:11.308327  269889 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 15:16:11.308390  269889 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 15:16:11.308476  269889 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 15:16:11.308563  269889 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 15:16:11.308627  269889 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 15:16:11.363016  269889 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 15:16:11.363157  269889 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 15:16:11.363319  269889 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 15:16:11.371299  269889 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 15:16:11.373444  269889 out.go:252]   - Generating certificates and keys ...
	I1202 15:16:11.373521  269889 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 15:16:11.373643  269889 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 15:16:11.669716  269889 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 15:16:11.748093  269889 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 15:16:11.958284  269889 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 15:16:12.137902  269889 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 15:16:12.322477  269889 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 15:16:12.322595  269889 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-141726 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 15:16:12.541408  269889 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 15:16:12.541588  269889 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-141726 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 15:16:12.878840  269889 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 15:16:13.043898  269889 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 15:16:13.276375  269889 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 15:16:13.276480  269889 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 15:16:13.428903  269889 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 15:16:13.520532  269889 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 15:16:13.637332  269889 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 15:16:13.858478  269889 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 15:16:14.150603  269889 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 15:16:14.150979  269889 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 15:16:14.155559  269889 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 15:16:14.157329  269889 out.go:252]   - Booting up control plane ...
	I1202 15:16:14.157415  269889 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 15:16:14.157506  269889 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 15:16:14.158097  269889 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 15:16:14.186867  269889 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 15:16:14.187021  269889 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 15:16:14.194003  269889 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 15:16:14.194203  269889 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 15:16:14.194257  269889 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 15:16:14.292102  269889 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 15:16:14.292230  269889 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 15:16:15.293712  269889 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001627687s
	I1202 15:16:15.297823  269889 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 15:16:15.297981  269889 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1202 15:16:15.298284  269889 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 15:16:15.298444  269889 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 15:16:16.393313  269889 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.095371752s
	I1202 15:16:17.126996  269889 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.829195864s
	I1202 15:16:18.799530  269889 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501617286s
	I1202 15:16:18.815844  269889 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 15:16:18.825209  269889 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 15:16:18.833668  269889 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 15:16:18.833966  269889 kubeadm.go:319] [mark-control-plane] Marking the node addons-141726 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 15:16:18.841085  269889 kubeadm.go:319] [bootstrap-token] Using token: 194opl.hhk7qv810vcwb7dj
	I1202 15:16:18.842488  269889 out.go:252]   - Configuring RBAC rules ...
	I1202 15:16:18.842652  269889 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 15:16:18.846110  269889 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 15:16:18.853024  269889 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 15:16:18.855488  269889 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 15:16:18.857997  269889 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 15:16:18.860085  269889 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 15:16:19.204981  269889 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 15:16:19.620725  269889 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 15:16:20.205212  269889 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 15:16:20.206173  269889 kubeadm.go:319] 
	I1202 15:16:20.206288  269889 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 15:16:20.206322  269889 kubeadm.go:319] 
	I1202 15:16:20.206458  269889 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 15:16:20.206468  269889 kubeadm.go:319] 
	I1202 15:16:20.206501  269889 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 15:16:20.206588  269889 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 15:16:20.206673  269889 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 15:16:20.206689  269889 kubeadm.go:319] 
	I1202 15:16:20.206768  269889 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 15:16:20.206782  269889 kubeadm.go:319] 
	I1202 15:16:20.206959  269889 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 15:16:20.206980  269889 kubeadm.go:319] 
	I1202 15:16:20.207047  269889 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 15:16:20.207121  269889 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 15:16:20.207177  269889 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 15:16:20.207189  269889 kubeadm.go:319] 
	I1202 15:16:20.207310  269889 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 15:16:20.207447  269889 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 15:16:20.207458  269889 kubeadm.go:319] 
	I1202 15:16:20.207593  269889 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 194opl.hhk7qv810vcwb7dj \
	I1202 15:16:20.207759  269889 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a700026e2fe1634919809d9050f2aa4b3e0ccbee543d4881e1cd695d56e7eef6 \
	I1202 15:16:20.207797  269889 kubeadm.go:319] 	--control-plane 
	I1202 15:16:20.207807  269889 kubeadm.go:319] 
	I1202 15:16:20.207905  269889 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 15:16:20.207921  269889 kubeadm.go:319] 
	I1202 15:16:20.207990  269889 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 194opl.hhk7qv810vcwb7dj \
	I1202 15:16:20.208087  269889 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a700026e2fe1634919809d9050f2aa4b3e0ccbee543d4881e1cd695d56e7eef6 
	I1202 15:16:20.209957  269889 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 15:16:20.210078  269889 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
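
The init output above also records the health endpoints kubeadm polls while booting the control plane: kubelet healthz on 10248, kube-apiserver livez on 8443, kube-controller-manager healthz on 10257, kube-scheduler livez on 10259. A hedged sketch of probing the same endpoints by hand from inside the node (ports and addresses taken from the log; run via something like `minikube ssh -p addons-141726`):

    curl -s  http://127.0.0.1:10248/healthz  ; echo   # kubelet
    curl -sk https://192.168.49.2:8443/livez ; echo   # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz ; echo   # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez   ; echo   # kube-scheduler
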
	I1202 15:16:20.210109  269889 cni.go:84] Creating CNI manager for ""
	I1202 15:16:20.210119  269889 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 15:16:20.212727  269889 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 15:16:20.214134  269889 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 15:16:20.218443  269889 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 15:16:20.218464  269889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 15:16:20.231417  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
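
With the kindnet CNI manifest applied above, one way to confirm the CNI actually came up is to check its pods and node readiness. A small sketch; the `app=kindnet` label selector is an assumption for illustration, not something shown in this log:

    kubectl --context addons-141726 -n kube-system get pods -l app=kindnet -o wide   # label assumed
    kubectl --context addons-141726 get nodes   # the node flips to Ready once the CNI is installed
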
	I1202 15:16:20.436290  269889 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 15:16:20.436374  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:20.436397  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-141726 minikube.k8s.io/updated_at=2025_12_02T15_16_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689 minikube.k8s.io/name=addons-141726 minikube.k8s.io/primary=true
	I1202 15:16:20.448517  269889 ops.go:34] apiserver oom_adj: -16
	I1202 15:16:20.514034  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:21.014028  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:21.514656  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:22.014917  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:22.514351  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:23.014818  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:23.514831  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:24.014961  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:24.514982  269889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 15:16:24.577263  269889 kubeadm.go:1114] duration metric: took 4.14095823s to wait for elevateKubeSystemPrivileges
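
The repeated `kubectl get sa default` runs above are minikube polling (roughly every 500ms) until kube-controller-manager has created the `default` ServiceAccount; that wait is what the 4.14s elevateKubeSystemPrivileges metric measures. An equivalent shell sketch using the exact command from the log:

    # wait for the default ServiceAccount to exist
    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
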
	I1202 15:16:24.577307  269889 kubeadm.go:403] duration metric: took 13.446595269s to StartCluster
	I1202 15:16:24.577338  269889 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:24.577507  269889 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:16:24.577944  269889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:16:24.578121  269889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 15:16:24.578154  269889 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 15:16:24.578210  269889 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1202 15:16:24.578342  269889 addons.go:70] Setting yakd=true in profile "addons-141726"
	I1202 15:16:24.578362  269889 addons.go:239] Setting addon yakd=true in "addons-141726"
	I1202 15:16:24.578359  269889 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:16:24.578370  269889 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-141726"
	I1202 15:16:24.578384  269889 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-141726"
	I1202 15:16:24.578389  269889 addons.go:70] Setting registry-creds=true in profile "addons-141726"
	I1202 15:16:24.578411  269889 addons.go:70] Setting default-storageclass=true in profile "addons-141726"
	I1202 15:16:24.578407  269889 addons.go:70] Setting volcano=true in profile "addons-141726"
	I1202 15:16:24.578434  269889 addons.go:70] Setting volumesnapshots=true in profile "addons-141726"
	I1202 15:16:24.578437  269889 addons.go:70] Setting gcp-auth=true in profile "addons-141726"
	I1202 15:16:24.578440  269889 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-141726"
	I1202 15:16:24.578407  269889 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-141726"
	I1202 15:16:24.578446  269889 addons.go:239] Setting addon volcano=true in "addons-141726"
	I1202 15:16:24.578450  269889 addons.go:239] Setting addon volumesnapshots=true in "addons-141726"
	I1202 15:16:24.578451  269889 addons.go:70] Setting metrics-server=true in profile "addons-141726"
	I1202 15:16:24.578456  269889 mustload.go:66] Loading cluster: addons-141726
	I1202 15:16:24.578460  269889 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-141726"
	I1202 15:16:24.578465  269889 addons.go:239] Setting addon metrics-server=true in "addons-141726"
	I1202 15:16:24.578467  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.578497  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.578507  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.578609  269889 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:16:24.578798  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578841  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578857  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578958  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578968  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578982  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.579061  269889 addons.go:70] Setting ingress=true in profile "addons-141726"
	I1202 15:16:24.579084  269889 addons.go:239] Setting addon ingress=true in "addons-141726"
	I1202 15:16:24.579124  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.579221  269889 addons.go:70] Setting ingress-dns=true in profile "addons-141726"
	I1202 15:16:24.579253  269889 addons.go:239] Setting addon ingress-dns=true in "addons-141726"
	I1202 15:16:24.579286  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.579541  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578357  269889 addons.go:70] Setting inspektor-gadget=true in profile "addons-141726"
	I1202 15:16:24.579687  269889 addons.go:239] Setting addon inspektor-gadget=true in "addons-141726"
	I1202 15:16:24.579713  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.579766  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.578409  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.579993  269889 addons.go:70] Setting storage-provisioner=true in profile "addons-141726"
	I1202 15:16:24.580016  269889 addons.go:239] Setting addon storage-provisioner=true in "addons-141726"
	I1202 15:16:24.580046  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.578437  269889 addons.go:239] Setting addon registry-creds=true in "addons-141726"
	I1202 15:16:24.580086  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.580292  269889 addons.go:70] Setting cloud-spanner=true in profile "addons-141726"
	I1202 15:16:24.580312  269889 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-141726"
	I1202 15:16:24.580318  269889 addons.go:239] Setting addon cloud-spanner=true in "addons-141726"
	I1202 15:16:24.580343  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.580356  269889 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-141726"
	I1202 15:16:24.580381  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.580489  269889 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-141726"
	I1202 15:16:24.580540  269889 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-141726"
	I1202 15:16:24.580572  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.580593  269889 addons.go:70] Setting registry=true in profile "addons-141726"
	I1202 15:16:24.580623  269889 addons.go:239] Setting addon registry=true in "addons-141726"
	I1202 15:16:24.580650  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.578407  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.581211  269889 out.go:179] * Verifying Kubernetes components...
	I1202 15:16:24.582572  269889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 15:16:24.588090  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.588632  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.588675  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.589131  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.589235  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.589797  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.590557  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.592648  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.614077  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.626534  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.637448  269889 addons.go:239] Setting addon default-storageclass=true in "addons-141726"
	I1202 15:16:24.637513  269889 host.go:66] Checking if "addons-141726" exists ...
	W1202 15:16:24.638214  269889 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1202 15:16:24.644403  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.652115  269889 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-141726"
	I1202 15:16:24.652179  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:24.652706  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:24.666051  269889 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 15:16:24.667718  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 15:16:24.668984  269889 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 15:16:24.669009  269889 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 15:16:24.669079  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.669530  269889 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1202 15:16:24.670225  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1202 15:16:24.670358  269889 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 15:16:24.670371  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1202 15:16:24.670508  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.671435  269889 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1202 15:16:24.672614  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 15:16:24.672773  269889 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1202 15:16:24.672839  269889 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 15:16:24.672853  269889 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 15:16:24.672944  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.673798  269889 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 15:16:24.673814  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1202 15:16:24.673859  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.674833  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 15:16:24.675995  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 15:16:24.677173  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 15:16:24.678232  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 15:16:24.682096  269889 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1202 15:16:24.682156  269889 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 15:16:24.683582  269889 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 15:16:24.683603  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 15:16:24.683680  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.683853  269889 out.go:179]   - Using image docker.io/registry:3.0.0
	I1202 15:16:24.684097  269889 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1202 15:16:24.684126  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 15:16:24.685174  269889 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 15:16:24.685195  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 15:16:24.685255  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.685469  269889 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 15:16:24.685931  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 15:16:24.685945  269889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 15:16:24.686004  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.686784  269889 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 15:16:24.686918  269889 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 15:16:24.686952  269889 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 15:16:24.687032  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.689111  269889 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 15:16:24.690730  269889 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 15:16:24.690750  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 15:16:24.690805  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.704774  269889 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1202 15:16:24.712401  269889 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 15:16:24.715529  269889 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1202 15:16:24.715554  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 15:16:24.715642  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.716448  269889 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 15:16:24.716471  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 15:16:24.716601  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.720032  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.722962  269889 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1202 15:16:24.723839  269889 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 15:16:24.723873  269889 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 15:16:24.723936  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.725140  269889 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 15:16:24.725163  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1202 15:16:24.725223  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.730820  269889 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1202 15:16:24.732199  269889 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 15:16:24.732223  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 15:16:24.732295  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.735667  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.736780  269889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
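
The one-liner above rewrites the coredns ConfigMap so in-cluster DNS resolves host.minikube.internal to the gateway address 192.168.49.1 and enables query logging. The same edit, unrolled into readable steps as a sketch (plain kubectl substituted for the bundled binary path):

    # insert a hosts{} block for host.minikube.internal and a `log` directive,
    # then replace the coredns ConfigMap with the edited copy
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl -n kube-system replace -f -
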
	I1202 15:16:24.751814  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.759573  269889 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 15:16:24.760866  269889 out.go:179]   - Using image docker.io/busybox:stable
	I1202 15:16:24.762465  269889 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 15:16:24.762485  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 15:16:24.762643  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:24.763526  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.772776  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.775906  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.776201  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.778396  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.781646  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.782453  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.784759  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.786977  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.787140  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.789595  269889 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1202 15:16:24.792526  269889 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 15:16:24.793663  269889 retry.go:31] will retry after 127.434668ms: ssh: handshake failed: EOF
	I1202 15:16:24.805137  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	W1202 15:16:24.809032  269889 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 15:16:24.809073  269889 retry.go:31] will retry after 184.320088ms: ssh: handshake failed: EOF
	I1202 15:16:24.809375  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:24.903804  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 15:16:24.919505  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 15:16:24.928234  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 15:16:24.928260  269889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 15:16:24.928472  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 15:16:24.956360  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 15:16:24.957613  269889 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 15:16:24.957636  269889 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 15:16:24.961344  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 15:16:24.961450  269889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 15:16:24.962989  269889 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 15:16:24.963011  269889 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 15:16:24.963526  269889 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 15:16:24.963541  269889 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 15:16:24.981465  269889 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 15:16:24.981490  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 15:16:24.986046  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 15:16:24.987012  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 15:16:24.989795  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 15:16:24.992160  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 15:16:24.993599  269889 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 15:16:24.993618  269889 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 15:16:25.017568  269889 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 15:16:25.017748  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 15:16:25.023369  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 15:16:25.023466  269889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 15:16:25.027931  269889 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 15:16:25.027959  269889 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 15:16:25.031229  269889 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 15:16:25.031255  269889 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 15:16:25.031635  269889 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 15:16:25.031658  269889 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 15:16:25.069168  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 15:16:25.073804  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 15:16:25.073835  269889 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 15:16:25.081865  269889 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 15:16:25.081894  269889 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 15:16:25.085535  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 15:16:25.085563  269889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 15:16:25.097230  269889 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 15:16:25.097267  269889 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 15:16:25.125189  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 15:16:25.143020  269889 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 15:16:25.143050  269889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 15:16:25.144755  269889 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1202 15:16:25.145725  269889 node_ready.go:35] waiting up to 6m0s for node "addons-141726" to be "Ready" ...
	I1202 15:16:25.146574  269889 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 15:16:25.146592  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 15:16:25.155968  269889 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 15:16:25.155993  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 15:16:25.175571  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 15:16:25.190020  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 15:16:25.192517  269889 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 15:16:25.192542  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 15:16:25.197291  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 15:16:25.199278  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 15:16:25.269075  269889 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 15:16:25.269125  269889 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 15:16:25.337656  269889 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 15:16:25.337679  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 15:16:25.399358  269889 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 15:16:25.399381  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 15:16:25.498968  269889 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 15:16:25.499015  269889 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1202 15:16:25.555502  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 15:16:25.669711  269889 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-141726" context rescaled to 1 replicas
	I1202 15:16:26.213877  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.310031659s)
	I1202 15:16:26.213917  269889 addons.go:495] Verifying addon ingress=true in "addons-141726"
	I1202 15:16:26.213964  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.294424411s)
	I1202 15:16:26.214079  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.285584172s)
	I1202 15:16:26.214137  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.257749343s)
	I1202 15:16:26.214213  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.228132327s)
	I1202 15:16:26.214480  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227442853s)
	I1202 15:16:26.214511  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.222323209s)
	I1202 15:16:26.214540  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.224720282s)
	I1202 15:16:26.214588  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.145392858s)
	I1202 15:16:26.214600  269889 addons.go:495] Verifying addon registry=true in "addons-141726"
	I1202 15:16:26.214648  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.089429282s)
	I1202 15:16:26.214734  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.039081855s)
	I1202 15:16:26.214751  269889 addons.go:495] Verifying addon metrics-server=true in "addons-141726"
	I1202 15:16:26.214831  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.024775746s)
	I1202 15:16:26.214917  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.017596623s)
	I1202 15:16:26.215726  269889 out.go:179] * Verifying ingress addon...
	I1202 15:16:26.215746  269889 out.go:179] * Verifying registry addon...
	I1202 15:16:26.216695  269889 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-141726 service yakd-dashboard -n yakd-dashboard
	
	I1202 15:16:26.217861  269889 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 15:16:26.218548  269889 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 15:16:26.221282  269889 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 15:16:26.221302  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:26.221472  269889 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 15:16:26.221488  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1202 15:16:26.224879  269889 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
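
The "Operation cannot be fulfilled ... the object has been modified" failure above is an optimistic-concurrency conflict while marking local-path as the default StorageClass. If a later retry does not clear it, one manual way to set the annotation (a sketch, not minikube's own code path) is:

    kubectl patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
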
	I1202 15:16:26.721204  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:26.725929  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.526605486s)
	W1202 15:16:26.725984  269889 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 15:16:26.726013  269889 retry.go:31] will retry after 187.482016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
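The "no matches for kind VolumeSnapshotClass" error is a startup race: the VolumeSnapshotClass CRD is created in the same kubectl apply, and the custom resource is rejected before the new API is registered with the apiserver, which is why the log schedules a retry (and the retry with --force completes about 2.5s later). Outside the addon manager, a sketch of the same fix, with the manifest path copied from the log and an illustrative timeout, is to wait for the CRD to be established before applying the class:

	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml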
	I1202 15:16:26.726150  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.170603124s)
	I1202 15:16:26.726186  269889 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-141726"
	I1202 15:16:26.728278  269889 out.go:179] * Verifying csi-hostpath-driver addon...
	I1202 15:16:26.729519  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:26.730329  269889 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 15:16:26.733586  269889 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 15:16:26.733612  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:26.914385  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1202 15:16:27.149166  269889 node_ready.go:57] node "addons-141726" has "Ready":"False" status (will retry)
	I1202 15:16:27.221536  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:27.221721  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:27.233106  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:27.721501  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:27.721649  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:27.733000  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:28.222154  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:28.222206  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:28.233829  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:28.721223  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:28.721228  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:28.734091  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 15:16:29.149319  269889 node_ready.go:57] node "addons-141726" has "Ready":"False" status (will retry)
	I1202 15:16:29.221841  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:29.221980  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:29.232876  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:29.391629  269889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.477191246s)
	I1202 15:16:29.721943  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:29.722232  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:29.733722  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:30.221673  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:30.221726  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:30.233247  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:30.721654  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:30.721816  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:30.733559  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:31.221704  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:31.221887  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:31.233194  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 15:16:31.648621  269889 node_ready.go:57] node "addons-141726" has "Ready":"False" status (will retry)
	I1202 15:16:31.721751  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:31.721772  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:31.733368  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:32.221225  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:32.221278  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:32.233759  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:32.240986  269889 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 15:16:32.241048  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:32.260637  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:32.367624  269889 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 15:16:32.380914  269889 addons.go:239] Setting addon gcp-auth=true in "addons-141726"
	I1202 15:16:32.380965  269889 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:16:32.381301  269889 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:16:32.400618  269889 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 15:16:32.400660  269889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:16:32.419542  269889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:16:32.517095  269889 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 15:16:32.518624  269889 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 15:16:32.519774  269889 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 15:16:32.519798  269889 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 15:16:32.533327  269889 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 15:16:32.533361  269889 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 15:16:32.547519  269889 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 15:16:32.547542  269889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 15:16:32.561455  269889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 15:16:32.721860  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:32.721955  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:32.733443  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:32.879827  269889 addons.go:495] Verifying addon gcp-auth=true in "addons-141726"
	I1202 15:16:32.881055  269889 out.go:179] * Verifying gcp-auth addon...
	I1202 15:16:32.883436  269889 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 15:16:32.885567  269889 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 15:16:32.885591  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
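The gcp-auth verification step only waits for a pod labeled kubernetes.io/minikube-addons=gcp-auth in the gcp-auth namespace to leave Pending. A rough manual equivalent, assuming the same label selector and namespace shown in this log:

	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth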
	I1202 15:16:33.221662  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:33.221787  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:33.233288  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:33.387215  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 15:16:33.649298  269889 node_ready.go:57] node "addons-141726" has "Ready":"False" status (will retry)
	I1202 15:16:33.721300  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:33.721702  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:33.822511  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:33.887223  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:34.221075  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:34.221491  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:34.233147  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:34.387189  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:34.721245  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:34.721321  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:34.733819  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:34.886346  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:35.220959  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:35.221182  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:35.233741  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:35.386670  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:35.722301  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:35.722313  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:35.733734  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:35.886637  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 15:16:36.149574  269889 node_ready.go:57] node "addons-141726" has "Ready":"False" status (will retry)
	I1202 15:16:36.221221  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:36.221588  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:36.233718  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:36.387200  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:36.650809  269889 node_ready.go:49] node "addons-141726" is "Ready"
	I1202 15:16:36.650858  269889 node_ready.go:38] duration metric: took 11.505100033s for node "addons-141726" to be "Ready" ...
	I1202 15:16:36.650878  269889 api_server.go:52] waiting for apiserver process to appear ...
	I1202 15:16:36.650939  269889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 15:16:36.674104  269889 api_server.go:72] duration metric: took 12.095908422s to wait for apiserver process to appear ...
	I1202 15:16:36.674140  269889 api_server.go:88] waiting for apiserver healthz status ...
	I1202 15:16:36.674168  269889 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 15:16:36.679660  269889 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 15:16:36.680671  269889 api_server.go:141] control plane version: v1.34.2
	I1202 15:16:36.680704  269889 api_server.go:131] duration metric: took 6.556216ms to wait for apiserver health ...
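The healthz probe above is a plain HTTP GET against the apiserver endpoint taken from the kubeconfig (the URL and control plane version are specific to this run). The same check can be reproduced through kubectl:

	kubectl get --raw /healthz    # prints "ok" when the apiserver is healthy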
	I1202 15:16:36.680717  269889 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 15:16:36.684671  269889 system_pods.go:59] 20 kube-system pods found
	I1202 15:16:36.684709  269889 system_pods.go:61] "amd-gpu-device-plugin-5f7fs" [f2b19fdb-b25c-4936-aabf-26c33a233e0e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 15:16:36.684718  269889 system_pods.go:61] "coredns-66bc5c9577-4lmgt" [d46c8b2e-ddd0-4a4a-8250-61aea385667d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 15:16:36.684727  269889 system_pods.go:61] "csi-hostpath-attacher-0" [d80978c0-9200-4dc6-95c1-d84a76eefd36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 15:16:36.684731  269889 system_pods.go:61] "csi-hostpath-resizer-0" [f665549e-f00a-4974-8e84-a683f0595510] Pending
	I1202 15:16:36.684736  269889 system_pods.go:61] "csi-hostpathplugin-kdbl4" [4497fccc-9a9f-4e59-8bf0-4f3cbf2596ce] Pending
	I1202 15:16:36.684740  269889 system_pods.go:61] "etcd-addons-141726" [821f25a8-606b-4801-a713-bb19c4d70b79] Running
	I1202 15:16:36.684745  269889 system_pods.go:61] "kindnet-6j8vt" [e79cc485-44b5-4858-a017-56f335770ce1] Running
	I1202 15:16:36.684749  269889 system_pods.go:61] "kube-apiserver-addons-141726" [98baf8b8-7320-4686-8e29-6b3c5001bdce] Running
	I1202 15:16:36.684752  269889 system_pods.go:61] "kube-controller-manager-addons-141726" [458659e3-701d-4f8c-9443-36b8cd099bb9] Running
	I1202 15:16:36.684758  269889 system_pods.go:61] "kube-ingress-dns-minikube" [08c0ee33-a0d3-4db5-95a5-7c75138c80f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 15:16:36.684767  269889 system_pods.go:61] "kube-proxy-ngfdv" [18e885be-e7eb-4886-9c44-06e4630025c2] Running
	I1202 15:16:36.684770  269889 system_pods.go:61] "kube-scheduler-addons-141726" [44d110b6-3c0f-443f-a8c8-f70b0d783e3a] Running
	I1202 15:16:36.684775  269889 system_pods.go:61] "metrics-server-85b7d694d7-fdkfv" [29527913-8c48-4e43-932a-d58b491cf15d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 15:16:36.684779  269889 system_pods.go:61] "nvidia-device-plugin-daemonset-gdvkl" [387a816e-abc8-433b-85b4-4c9d2df06ea3] Pending
	I1202 15:16:36.684784  269889 system_pods.go:61] "registry-6b586f9694-4ndqk" [ca026742-659d-47f4-80ef-ccc67046c4d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 15:16:36.684789  269889 system_pods.go:61] "registry-creds-764b6fb674-pw2zl" [39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 15:16:36.684792  269889 system_pods.go:61] "registry-proxy-md75n" [73921430-8808-4e00-888a-b97d19bf02e5] Pending
	I1202 15:16:36.684798  269889 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2svxc" [dc40710d-232f-4cfd-a136-e042fd8c9c4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:36.684805  269889 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bxzws" [260f4023-60ed-4220-b262-009dc06daa3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:36.684809  269889 system_pods.go:61] "storage-provisioner" [eba14afe-432c-429d-8dda-5734280cc7ca] Pending
	I1202 15:16:36.684816  269889 system_pods.go:74] duration metric: took 4.092822ms to wait for pod list to return data ...
	I1202 15:16:36.684826  269889 default_sa.go:34] waiting for default service account to be created ...
	I1202 15:16:36.687169  269889 default_sa.go:45] found service account: "default"
	I1202 15:16:36.687201  269889 default_sa.go:55] duration metric: took 2.368431ms for default service account to be created ...
	I1202 15:16:36.687214  269889 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 15:16:36.690395  269889 system_pods.go:86] 20 kube-system pods found
	I1202 15:16:36.690448  269889 system_pods.go:89] "amd-gpu-device-plugin-5f7fs" [f2b19fdb-b25c-4936-aabf-26c33a233e0e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 15:16:36.690455  269889 system_pods.go:89] "coredns-66bc5c9577-4lmgt" [d46c8b2e-ddd0-4a4a-8250-61aea385667d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 15:16:36.690463  269889 system_pods.go:89] "csi-hostpath-attacher-0" [d80978c0-9200-4dc6-95c1-d84a76eefd36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 15:16:36.690468  269889 system_pods.go:89] "csi-hostpath-resizer-0" [f665549e-f00a-4974-8e84-a683f0595510] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 15:16:36.690472  269889 system_pods.go:89] "csi-hostpathplugin-kdbl4" [4497fccc-9a9f-4e59-8bf0-4f3cbf2596ce] Pending
	I1202 15:16:36.690475  269889 system_pods.go:89] "etcd-addons-141726" [821f25a8-606b-4801-a713-bb19c4d70b79] Running
	I1202 15:16:36.690479  269889 system_pods.go:89] "kindnet-6j8vt" [e79cc485-44b5-4858-a017-56f335770ce1] Running
	I1202 15:16:36.690483  269889 system_pods.go:89] "kube-apiserver-addons-141726" [98baf8b8-7320-4686-8e29-6b3c5001bdce] Running
	I1202 15:16:36.690487  269889 system_pods.go:89] "kube-controller-manager-addons-141726" [458659e3-701d-4f8c-9443-36b8cd099bb9] Running
	I1202 15:16:36.690496  269889 system_pods.go:89] "kube-ingress-dns-minikube" [08c0ee33-a0d3-4db5-95a5-7c75138c80f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 15:16:36.690501  269889 system_pods.go:89] "kube-proxy-ngfdv" [18e885be-e7eb-4886-9c44-06e4630025c2] Running
	I1202 15:16:36.690511  269889 system_pods.go:89] "kube-scheduler-addons-141726" [44d110b6-3c0f-443f-a8c8-f70b0d783e3a] Running
	I1202 15:16:36.690516  269889 system_pods.go:89] "metrics-server-85b7d694d7-fdkfv" [29527913-8c48-4e43-932a-d58b491cf15d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 15:16:36.690523  269889 system_pods.go:89] "nvidia-device-plugin-daemonset-gdvkl" [387a816e-abc8-433b-85b4-4c9d2df06ea3] Pending
	I1202 15:16:36.690529  269889 system_pods.go:89] "registry-6b586f9694-4ndqk" [ca026742-659d-47f4-80ef-ccc67046c4d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 15:16:36.690536  269889 system_pods.go:89] "registry-creds-764b6fb674-pw2zl" [39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 15:16:36.690540  269889 system_pods.go:89] "registry-proxy-md75n" [73921430-8808-4e00-888a-b97d19bf02e5] Pending
	I1202 15:16:36.690550  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2svxc" [dc40710d-232f-4cfd-a136-e042fd8c9c4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:36.690562  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxzws" [260f4023-60ed-4220-b262-009dc06daa3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:36.690568  269889 system_pods.go:89] "storage-provisioner" [eba14afe-432c-429d-8dda-5734280cc7ca] Pending
	I1202 15:16:36.690588  269889 retry.go:31] will retry after 280.343325ms: missing components: kube-dns
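The "missing components: kube-dns" retry means the k8s-apps waiter treats CoreDNS as a required component; the other addon pods above may remain Pending without blocking it, and here kube-dns is the only required component not yet Running. A rough manual equivalent, assuming the usual k8s-app=kube-dns label on the CoreDNS pods:

	kubectl -n kube-system get pods -l k8s-app=kube-dns
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s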
	I1202 15:16:36.721405  269889 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 15:16:36.721444  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:36.721459  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:36.733706  269889 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 15:16:36.733731  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:36.886994  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:36.988633  269889 system_pods.go:86] 20 kube-system pods found
	I1202 15:16:36.988668  269889 system_pods.go:89] "amd-gpu-device-plugin-5f7fs" [f2b19fdb-b25c-4936-aabf-26c33a233e0e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 15:16:36.988675  269889 system_pods.go:89] "coredns-66bc5c9577-4lmgt" [d46c8b2e-ddd0-4a4a-8250-61aea385667d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 15:16:36.988683  269889 system_pods.go:89] "csi-hostpath-attacher-0" [d80978c0-9200-4dc6-95c1-d84a76eefd36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 15:16:36.988690  269889 system_pods.go:89] "csi-hostpath-resizer-0" [f665549e-f00a-4974-8e84-a683f0595510] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 15:16:36.988697  269889 system_pods.go:89] "csi-hostpathplugin-kdbl4" [4497fccc-9a9f-4e59-8bf0-4f3cbf2596ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 15:16:36.988701  269889 system_pods.go:89] "etcd-addons-141726" [821f25a8-606b-4801-a713-bb19c4d70b79] Running
	I1202 15:16:36.988707  269889 system_pods.go:89] "kindnet-6j8vt" [e79cc485-44b5-4858-a017-56f335770ce1] Running
	I1202 15:16:36.988730  269889 system_pods.go:89] "kube-apiserver-addons-141726" [98baf8b8-7320-4686-8e29-6b3c5001bdce] Running
	I1202 15:16:36.988735  269889 system_pods.go:89] "kube-controller-manager-addons-141726" [458659e3-701d-4f8c-9443-36b8cd099bb9] Running
	I1202 15:16:36.988740  269889 system_pods.go:89] "kube-ingress-dns-minikube" [08c0ee33-a0d3-4db5-95a5-7c75138c80f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 15:16:36.988743  269889 system_pods.go:89] "kube-proxy-ngfdv" [18e885be-e7eb-4886-9c44-06e4630025c2] Running
	I1202 15:16:36.988747  269889 system_pods.go:89] "kube-scheduler-addons-141726" [44d110b6-3c0f-443f-a8c8-f70b0d783e3a] Running
	I1202 15:16:36.988752  269889 system_pods.go:89] "metrics-server-85b7d694d7-fdkfv" [29527913-8c48-4e43-932a-d58b491cf15d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 15:16:36.988758  269889 system_pods.go:89] "nvidia-device-plugin-daemonset-gdvkl" [387a816e-abc8-433b-85b4-4c9d2df06ea3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 15:16:36.988769  269889 system_pods.go:89] "registry-6b586f9694-4ndqk" [ca026742-659d-47f4-80ef-ccc67046c4d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 15:16:36.988777  269889 system_pods.go:89] "registry-creds-764b6fb674-pw2zl" [39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 15:16:36.988785  269889 system_pods.go:89] "registry-proxy-md75n" [73921430-8808-4e00-888a-b97d19bf02e5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 15:16:36.988790  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2svxc" [dc40710d-232f-4cfd-a136-e042fd8c9c4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:36.988800  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxzws" [260f4023-60ed-4220-b262-009dc06daa3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:36.988806  269889 system_pods.go:89] "storage-provisioner" [eba14afe-432c-429d-8dda-5734280cc7ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 15:16:36.988825  269889 retry.go:31] will retry after 323.861425ms: missing components: kube-dns
	I1202 15:16:37.222859  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:37.223145  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:37.235350  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:37.318072  269889 system_pods.go:86] 20 kube-system pods found
	I1202 15:16:37.318115  269889 system_pods.go:89] "amd-gpu-device-plugin-5f7fs" [f2b19fdb-b25c-4936-aabf-26c33a233e0e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 15:16:37.318128  269889 system_pods.go:89] "coredns-66bc5c9577-4lmgt" [d46c8b2e-ddd0-4a4a-8250-61aea385667d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 15:16:37.318140  269889 system_pods.go:89] "csi-hostpath-attacher-0" [d80978c0-9200-4dc6-95c1-d84a76eefd36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 15:16:37.318149  269889 system_pods.go:89] "csi-hostpath-resizer-0" [f665549e-f00a-4974-8e84-a683f0595510] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 15:16:37.318158  269889 system_pods.go:89] "csi-hostpathplugin-kdbl4" [4497fccc-9a9f-4e59-8bf0-4f3cbf2596ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 15:16:37.318166  269889 system_pods.go:89] "etcd-addons-141726" [821f25a8-606b-4801-a713-bb19c4d70b79] Running
	I1202 15:16:37.318173  269889 system_pods.go:89] "kindnet-6j8vt" [e79cc485-44b5-4858-a017-56f335770ce1] Running
	I1202 15:16:37.318181  269889 system_pods.go:89] "kube-apiserver-addons-141726" [98baf8b8-7320-4686-8e29-6b3c5001bdce] Running
	I1202 15:16:37.318190  269889 system_pods.go:89] "kube-controller-manager-addons-141726" [458659e3-701d-4f8c-9443-36b8cd099bb9] Running
	I1202 15:16:37.318208  269889 system_pods.go:89] "kube-ingress-dns-minikube" [08c0ee33-a0d3-4db5-95a5-7c75138c80f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 15:16:37.318213  269889 system_pods.go:89] "kube-proxy-ngfdv" [18e885be-e7eb-4886-9c44-06e4630025c2] Running
	I1202 15:16:37.318219  269889 system_pods.go:89] "kube-scheduler-addons-141726" [44d110b6-3c0f-443f-a8c8-f70b0d783e3a] Running
	I1202 15:16:37.318227  269889 system_pods.go:89] "metrics-server-85b7d694d7-fdkfv" [29527913-8c48-4e43-932a-d58b491cf15d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 15:16:37.318238  269889 system_pods.go:89] "nvidia-device-plugin-daemonset-gdvkl" [387a816e-abc8-433b-85b4-4c9d2df06ea3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 15:16:37.318247  269889 system_pods.go:89] "registry-6b586f9694-4ndqk" [ca026742-659d-47f4-80ef-ccc67046c4d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 15:16:37.318256  269889 system_pods.go:89] "registry-creds-764b6fb674-pw2zl" [39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 15:16:37.318263  269889 system_pods.go:89] "registry-proxy-md75n" [73921430-8808-4e00-888a-b97d19bf02e5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 15:16:37.318272  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2svxc" [dc40710d-232f-4cfd-a136-e042fd8c9c4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:37.318281  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxzws" [260f4023-60ed-4220-b262-009dc06daa3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:37.318288  269889 system_pods.go:89] "storage-provisioner" [eba14afe-432c-429d-8dda-5734280cc7ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 15:16:37.318309  269889 retry.go:31] will retry after 323.063008ms: missing components: kube-dns
	I1202 15:16:37.387316  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:37.646043  269889 system_pods.go:86] 20 kube-system pods found
	I1202 15:16:37.646079  269889 system_pods.go:89] "amd-gpu-device-plugin-5f7fs" [f2b19fdb-b25c-4936-aabf-26c33a233e0e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 15:16:37.646085  269889 system_pods.go:89] "coredns-66bc5c9577-4lmgt" [d46c8b2e-ddd0-4a4a-8250-61aea385667d] Running
	I1202 15:16:37.646093  269889 system_pods.go:89] "csi-hostpath-attacher-0" [d80978c0-9200-4dc6-95c1-d84a76eefd36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 15:16:37.646099  269889 system_pods.go:89] "csi-hostpath-resizer-0" [f665549e-f00a-4974-8e84-a683f0595510] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 15:16:37.646108  269889 system_pods.go:89] "csi-hostpathplugin-kdbl4" [4497fccc-9a9f-4e59-8bf0-4f3cbf2596ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 15:16:37.646113  269889 system_pods.go:89] "etcd-addons-141726" [821f25a8-606b-4801-a713-bb19c4d70b79] Running
	I1202 15:16:37.646119  269889 system_pods.go:89] "kindnet-6j8vt" [e79cc485-44b5-4858-a017-56f335770ce1] Running
	I1202 15:16:37.646126  269889 system_pods.go:89] "kube-apiserver-addons-141726" [98baf8b8-7320-4686-8e29-6b3c5001bdce] Running
	I1202 15:16:37.646139  269889 system_pods.go:89] "kube-controller-manager-addons-141726" [458659e3-701d-4f8c-9443-36b8cd099bb9] Running
	I1202 15:16:37.646147  269889 system_pods.go:89] "kube-ingress-dns-minikube" [08c0ee33-a0d3-4db5-95a5-7c75138c80f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 15:16:37.646157  269889 system_pods.go:89] "kube-proxy-ngfdv" [18e885be-e7eb-4886-9c44-06e4630025c2] Running
	I1202 15:16:37.646162  269889 system_pods.go:89] "kube-scheduler-addons-141726" [44d110b6-3c0f-443f-a8c8-f70b0d783e3a] Running
	I1202 15:16:37.646170  269889 system_pods.go:89] "metrics-server-85b7d694d7-fdkfv" [29527913-8c48-4e43-932a-d58b491cf15d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 15:16:37.646176  269889 system_pods.go:89] "nvidia-device-plugin-daemonset-gdvkl" [387a816e-abc8-433b-85b4-4c9d2df06ea3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 15:16:37.646185  269889 system_pods.go:89] "registry-6b586f9694-4ndqk" [ca026742-659d-47f4-80ef-ccc67046c4d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 15:16:37.646191  269889 system_pods.go:89] "registry-creds-764b6fb674-pw2zl" [39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 15:16:37.646200  269889 system_pods.go:89] "registry-proxy-md75n" [73921430-8808-4e00-888a-b97d19bf02e5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 15:16:37.646205  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2svxc" [dc40710d-232f-4cfd-a136-e042fd8c9c4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:37.646214  269889 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxzws" [260f4023-60ed-4220-b262-009dc06daa3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 15:16:37.646218  269889 system_pods.go:89] "storage-provisioner" [eba14afe-432c-429d-8dda-5734280cc7ca] Running
	I1202 15:16:37.646229  269889 system_pods.go:126] duration metric: took 959.007402ms to wait for k8s-apps to be running ...
	I1202 15:16:37.646244  269889 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 15:16:37.646296  269889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:16:37.661262  269889 system_svc.go:56] duration metric: took 15.005265ms WaitForService to wait for kubelet
	I1202 15:16:37.661302  269889 kubeadm.go:587] duration metric: took 13.083110952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 15:16:37.661325  269889 node_conditions.go:102] verifying NodePressure condition ...
	I1202 15:16:37.664696  269889 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 15:16:37.664735  269889 node_conditions.go:123] node cpu capacity is 8
	I1202 15:16:37.664811  269889 node_conditions.go:105] duration metric: took 3.477691ms to run NodePressure ...
	I1202 15:16:37.664826  269889 start.go:242] waiting for startup goroutines ...
	I1202 15:16:37.721659  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:37.721730  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:37.733896  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:37.887290  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:38.221619  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:38.221640  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:38.233456  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:38.387159  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:38.721833  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:38.721909  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:38.733671  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:38.886363  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:39.221711  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:39.221803  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:39.233148  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:39.387995  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:39.721323  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:39.721509  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:39.733213  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:39.887127  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:40.221795  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:40.221862  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:40.233737  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:40.386360  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:40.722203  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:40.722351  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:40.734023  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:40.886739  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:41.222246  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:41.223678  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:41.234238  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:41.388516  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:41.721271  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:41.721917  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:41.735070  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:41.887750  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:42.222378  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:42.222586  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:42.234182  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:42.387228  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:42.721824  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:42.721856  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:42.733853  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:42.887024  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:43.222006  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:43.222007  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:43.233816  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:43.387119  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:43.721612  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:43.721647  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:43.733876  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:43.887245  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:44.221830  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:44.221863  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:44.234659  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:44.388292  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:44.722234  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:44.722379  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:44.734614  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:44.886136  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:45.221548  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:45.221601  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:45.233211  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:45.387333  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:45.722028  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:45.722315  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:45.733793  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:45.886906  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:46.220887  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:46.221607  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:46.233467  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:46.387273  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:46.721639  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:46.721696  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:46.734000  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:46.887394  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:47.221900  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:47.221994  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:47.233370  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:47.387157  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:47.721634  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:47.721757  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:47.733478  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:47.887190  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:48.220954  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:48.221014  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:48.234089  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:48.386666  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:48.721649  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:48.721673  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:48.733157  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:48.887201  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:49.221859  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:49.221965  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:49.234114  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:49.387039  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:49.721588  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:49.721778  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:49.734024  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:49.887239  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:50.222164  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:50.222254  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:50.234536  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:50.387701  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:50.721124  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:50.721660  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:50.734265  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:50.887192  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:51.221915  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:51.221955  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:51.233631  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:51.387583  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:51.721854  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:51.721874  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:51.733719  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:51.886713  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:52.221878  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:52.221957  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:52.234081  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:52.387486  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:52.721592  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:52.721764  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:52.732755  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:52.886170  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:53.221381  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:53.221749  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:53.233037  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:53.386815  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:53.721389  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:53.721560  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:53.734132  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:53.887006  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:54.221412  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:54.221877  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:54.234647  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:54.387583  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:54.722628  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:54.722661  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:54.733301  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:54.887852  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:55.222543  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:55.222591  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:55.233758  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:55.386710  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:55.721295  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:55.721637  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:55.736006  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:55.886881  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:56.221598  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:56.221730  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:56.233230  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:56.387215  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:56.721827  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:56.721852  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:56.733367  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:56.887082  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:57.221242  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:57.221546  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:57.233385  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:57.387194  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:57.721314  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:57.721386  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:57.734404  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:57.887666  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:58.221910  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:58.221954  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:58.234133  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:58.387052  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:58.724110  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:58.724195  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:58.735493  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:58.887788  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:59.223077  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:59.223121  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:59.233978  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:59.387165  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:16:59.721593  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:16:59.721657  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:16:59.733898  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:16:59.888693  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:00.221312  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:00.221376  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:00.234331  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:00.387357  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:00.895830  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:00.895845  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:00.895875  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:00.896132  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:01.243677  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:01.244019  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:01.244073  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:01.386940  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:01.721228  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:01.721919  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:01.733893  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:01.887255  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:02.221966  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:02.222145  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:02.234017  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:02.386504  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:02.722154  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:02.722194  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:02.734245  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:02.887274  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:03.221692  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:03.221859  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:03.233711  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:03.386527  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:03.721481  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:03.721537  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:03.733097  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:03.886835  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:04.220860  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:04.221547  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:04.233642  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:04.387249  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:04.721466  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:04.721499  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:04.734494  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:04.887655  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:05.221955  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:05.222142  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:05.234091  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:05.387461  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:05.722258  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:05.722618  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:05.734078  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:05.887012  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:06.220870  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:06.221568  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:06.233596  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:06.388161  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:06.721792  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:06.721800  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:06.733687  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:06.888296  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:07.221389  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:07.221775  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 15:17:07.234300  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:07.387591  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:07.721565  269889 kapi.go:107] duration metric: took 41.503014015s to wait for kubernetes.io/minikube-addons=registry ...
	I1202 15:17:07.721606  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:07.733328  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:07.887404  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:08.221956  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:08.233742  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:08.386939  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:08.723660  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:08.735635  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:08.888088  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:09.221553  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:09.233688  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:09.388253  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:09.721473  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:09.734254  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:09.887281  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:10.221626  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:10.233715  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:10.387160  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:10.721349  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:10.734442  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:10.888167  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:11.228171  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:11.235833  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:11.390164  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:11.721793  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:11.734570  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:11.887263  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:12.221464  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:12.234733  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:12.386917  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:12.721469  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:12.734267  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:12.929081  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:13.221617  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:13.233559  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:13.388049  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:13.721277  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:13.734438  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:13.887618  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:14.222338  269889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 15:17:14.233948  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:14.387253  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:14.722294  269889 kapi.go:107] duration metric: took 48.504428753s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 15:17:14.734696  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:14.886209  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:15.234128  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:15.387266  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:15.766697  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:15.886264  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:16.233783  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:16.387447  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:16.734261  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:16.886858  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:17.234848  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:17.386799  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 15:17:17.736217  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:17.887362  269889 kapi.go:107] duration metric: took 45.003939373s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1202 15:17:17.947223  269889 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-141726 cluster.
	I1202 15:17:18.084255  269889 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 15:17:18.094664  269889 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
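The three messages above are the gcp-auth addon's own instructions for opting a pod out of the credential mount. As a minimal sketch of what that looks like (the pod name, command, and label value here are illustrative; only the gcp-auth-skip-secret key comes from the message above, and the image is the busybox build used elsewhere in this run):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: skip-gcp-auth-demo
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF

Because the mount is injected by the gcp-auth mutating webhook when a pod is created, the label has to be present at creation time; as the last message notes, already-running pods need to be recreated (or the addon re-enabled with --refresh) to pick up a change.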
	I1202 15:17:18.234681  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:18.735177  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:19.233708  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:19.735072  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:20.233876  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:20.734236  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:21.234820  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:21.734747  269889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 15:17:22.234472  269889 kapi.go:107] duration metric: took 55.504138764s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 15:17:22.236368  269889 out.go:179] * Enabled addons: registry-creds, inspektor-gadget, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, nvidia-device-plugin, metrics-server, ingress-dns, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1202 15:17:22.237575  269889 addons.go:530] duration metric: took 57.659360679s for enable addons: enabled=[registry-creds inspektor-gadget amd-gpu-device-plugin storage-provisioner cloud-spanner nvidia-device-plugin metrics-server ingress-dns yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1202 15:17:22.237618  269889 start.go:247] waiting for cluster config update ...
	I1202 15:17:22.237638  269889 start.go:256] writing updated cluster config ...
	I1202 15:17:22.237893  269889 ssh_runner.go:195] Run: rm -f paused
	I1202 15:17:22.241864  269889 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 15:17:22.245057  269889 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmgt" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.248877  269889 pod_ready.go:94] pod "coredns-66bc5c9577-4lmgt" is "Ready"
	I1202 15:17:22.248896  269889 pod_ready.go:86] duration metric: took 3.820816ms for pod "coredns-66bc5c9577-4lmgt" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.250575  269889 pod_ready.go:83] waiting for pod "etcd-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.253853  269889 pod_ready.go:94] pod "etcd-addons-141726" is "Ready"
	I1202 15:17:22.253872  269889 pod_ready.go:86] duration metric: took 3.279844ms for pod "etcd-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.255624  269889 pod_ready.go:83] waiting for pod "kube-apiserver-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.258858  269889 pod_ready.go:94] pod "kube-apiserver-addons-141726" is "Ready"
	I1202 15:17:22.258877  269889 pod_ready.go:86] duration metric: took 3.236011ms for pod "kube-apiserver-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.260514  269889 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.645772  269889 pod_ready.go:94] pod "kube-controller-manager-addons-141726" is "Ready"
	I1202 15:17:22.645802  269889 pod_ready.go:86] duration metric: took 385.272457ms for pod "kube-controller-manager-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:22.846309  269889 pod_ready.go:83] waiting for pod "kube-proxy-ngfdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:23.246326  269889 pod_ready.go:94] pod "kube-proxy-ngfdv" is "Ready"
	I1202 15:17:23.246355  269889 pod_ready.go:86] duration metric: took 400.021885ms for pod "kube-proxy-ngfdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:23.446666  269889 pod_ready.go:83] waiting for pod "kube-scheduler-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:23.846203  269889 pod_ready.go:94] pod "kube-scheduler-addons-141726" is "Ready"
	I1202 15:17:23.846240  269889 pod_ready.go:86] duration metric: took 399.546779ms for pod "kube-scheduler-addons-141726" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 15:17:23.846257  269889 pod_ready.go:40] duration metric: took 1.604360055s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 15:17:23.892799  269889 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 15:17:23.974654  269889 out.go:179] * Done! kubectl is now configured to use "addons-141726" cluster and "default" namespace by default
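The kapi.go:96 and pod_ready.go lines above simply poll pods matched by a label selector until each one reports Ready. A rough manual equivalent, shown only as a sketch (minikube does this through client-go rather than kubectl, and the timeouts below are illustrative), assuming kubectl is already pointed at the addons-141726 cluster:

# Addon pods are matched by label, e.g. the registry addon in kube-system:
kubectl wait pod --namespace kube-system \
  --selector=kubernetes.io/minikube-addons=registry \
  --for=condition=Ready --timeout=6m

# The final pod_ready.go pass checks the core kube-system components, e.g.:
kubectl wait pod --namespace kube-system \
  --selector=k8s-app=kube-dns \
  --for=condition=Ready --timeout=4m

The other selectors logged above (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver) follow the same pattern in their respective namespaces.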
	
	
	==> CRI-O <==
	Dec 02 15:17:20 addons-141726 crio[777]: time="2025-12-02T15:17:20.907827444Z" level=info msg="Starting container: 5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b" id=57d089fc-5598-4698-b0dd-0bd5206ee40e name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 15:17:20 addons-141726 crio[777]: time="2025-12-02T15:17:20.91070479Z" level=info msg="Started container" PID=6079 containerID=5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b description=kube-system/csi-hostpathplugin-kdbl4/csi-snapshotter id=57d089fc-5598-4698-b0dd-0bd5206ee40e name=/runtime.v1.RuntimeService/StartContainer sandboxID=24730fde387819aec0a7019084201f5c267f0dda8ac9dc83faad9432faa7dc34
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.926758633Z" level=info msg="Running pod sandbox: default/busybox/POD" id=625b017b-606a-4665-af05-701876c80f0c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.926840962Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.934139919Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e3aeb7956ff3b1bf11c99cee8af26ab3597680be97e50530278b524e3b6aad57 UID:68f402eb-f188-423f-828c-892475faf6db NetNS:/var/run/netns/5aeaa71e-20d0-4cce-9cda-031af4549613 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000eb6250}] Aliases:map[]}"
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.934181043Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.944908142Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e3aeb7956ff3b1bf11c99cee8af26ab3597680be97e50530278b524e3b6aad57 UID:68f402eb-f188-423f-828c-892475faf6db NetNS:/var/run/netns/5aeaa71e-20d0-4cce-9cda-031af4549613 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000eb6250}] Aliases:map[]}"
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.945039813Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.945974505Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.946943061Z" level=info msg="Ran pod sandbox e3aeb7956ff3b1bf11c99cee8af26ab3597680be97e50530278b524e3b6aad57 with infra container: default/busybox/POD" id=625b017b-606a-4665-af05-701876c80f0c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.948144813Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d8ff02bc-8582-4c42-b72f-c76ee13e6e6f name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.948292754Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d8ff02bc-8582-4c42-b72f-c76ee13e6e6f name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.948336532Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d8ff02bc-8582-4c42-b72f-c76ee13e6e6f name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.948920621Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6bfbb93b-da4e-4702-8d67-34efc85fd5a3 name=/runtime.v1.ImageService/PullImage
	Dec 02 15:17:24 addons-141726 crio[777]: time="2025-12-02T15:17:24.950269295Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 02 15:17:26 addons-141726 crio[777]: time="2025-12-02T15:17:26.900562089Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=6bfbb93b-da4e-4702-8d67-34efc85fd5a3 name=/runtime.v1.ImageService/PullImage
	Dec 02 15:17:26 addons-141726 crio[777]: time="2025-12-02T15:17:26.901231599Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ab89db76-6e89-4870-bd2c-73de28a582bb name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:17:26 addons-141726 crio[777]: time="2025-12-02T15:17:26.902668765Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dec81daf-c841-453a-ad43-8f79ebae73f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:17:26 addons-141726 crio[777]: time="2025-12-02T15:17:26.906416299Z" level=info msg="Creating container: default/busybox/busybox" id=7c86008c-6374-43af-af4a-dc015fbdfa76 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 15:17:26 addons-141726 crio[777]: time="2025-12-02T15:17:26.906564034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:17:26 addons-141726 crio[777]: time="2025-12-02T15:17:26.911956302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:17:26 addons-141726 crio[777]: time="2025-12-02T15:17:26.912377269Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:17:26 addons-141726 crio[777]: time="2025-12-02T15:17:26.952223404Z" level=info msg="Created container c50e4749cdb9918eb9dda40c62a3cb5527c4ca58db34591eac688f13c46c40a8: default/busybox/busybox" id=7c86008c-6374-43af-af4a-dc015fbdfa76 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 15:17:26 addons-141726 crio[777]: time="2025-12-02T15:17:26.952959015Z" level=info msg="Starting container: c50e4749cdb9918eb9dda40c62a3cb5527c4ca58db34591eac688f13c46c40a8" id=6757f300-2f51-439a-8cd9-4575f2dce4c8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 15:17:26 addons-141726 crio[777]: time="2025-12-02T15:17:26.955297007Z" level=info msg="Started container" PID=6206 containerID=c50e4749cdb9918eb9dda40c62a3cb5527c4ca58db34591eac688f13c46c40a8 description=default/busybox/busybox id=6757f300-2f51-439a-8cd9-4575f2dce4c8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3aeb7956ff3b1bf11c99cee8af26ab3597680be97e50530278b524e3b6aad57
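The CRI-O messages above cover the sandbox setup, image pull, and container start for default/busybox. One way to inspect that same runtime state by hand (not something the test run itself does; it assumes the same binary and profile used throughout this report) is crictl inside the node:

# List pod sandboxes, containers, and the pulled busybox image via CRI-O:
out/minikube-linux-amd64 -p addons-141726 ssh -- sudo crictl pods
out/minikube-linux-amd64 -p addons-141726 ssh -- sudo crictl ps -a
out/minikube-linux-amd64 -p addons-141726 ssh -- sudo crictl images | grep busybox

The container status table below is essentially the crictl ps -a view of the same node.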
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	c50e4749cdb99       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          9 seconds ago        Running             busybox                                  0                   e3aeb7956ff3b       busybox                                    default
	5412cbcb9dad2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          15 seconds ago       Running             csi-snapshotter                          0                   24730fde38781       csi-hostpathplugin-kdbl4                   kube-system
	584adcb3687a9       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          16 seconds ago       Running             csi-provisioner                          0                   24730fde38781       csi-hostpathplugin-kdbl4                   kube-system
	00fe2f1035e36       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            17 seconds ago       Running             liveness-probe                           0                   24730fde38781       csi-hostpathplugin-kdbl4                   kube-system
	c829c27b2be0c       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           18 seconds ago       Running             hostpath                                 0                   24730fde38781       csi-hostpathplugin-kdbl4                   kube-system
	fadc45d9931b2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 19 seconds ago       Running             gcp-auth                                 0                   0b0b94d99d0f9       gcp-auth-78565c9fb4-v79fk                  gcp-auth
	3307ae3898cde       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                21 seconds ago       Running             node-driver-registrar                    0                   24730fde38781       csi-hostpathplugin-kdbl4                   kube-system
	30003c9aa47bb       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             22 seconds ago       Running             controller                               0                   fd38856ee9de5       ingress-nginx-controller-6c8bf45fb-hqxvp   ingress-nginx
	ee7da34f110ee       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             25 seconds ago       Exited              patch                                    2                   e67980403726c       gcp-auth-certs-patch-tblv4                 gcp-auth
	9d97c1bc8794d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            25 seconds ago       Running             gadget                                   0                   703d3da8f1e55       gadget-sbzvc                               gadget
	874bcd460b4cc       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              28 seconds ago       Running             registry-proxy                           0                   491c5508c92ba       registry-proxy-md75n                       kube-system
	a087fdb2d51ae       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     31 seconds ago       Running             amd-gpu-device-plugin                    0                   86a4d5675b7e2       amd-gpu-device-plugin-5f7fs                kube-system
	44579fbc5bf76       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             33 seconds ago       Running             csi-attacher                             0                   1172f295b5034       csi-hostpath-attacher-0                    kube-system
	398f34ffd447f       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              34 seconds ago       Running             csi-resizer                              0                   6cf09c0fa449f       csi-hostpath-resizer-0                     kube-system
	51d82f122a468       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   34 seconds ago       Running             csi-external-health-monitor-controller   0                   24730fde38781       csi-hostpathplugin-kdbl4                   kube-system
	3334bfe46f760       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   36 seconds ago       Exited              create                                   0                   1587f43002856       gcp-auth-certs-create-znxkb                gcp-auth
	1433ec009789d       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     36 seconds ago       Running             nvidia-device-plugin-ctr                 0                   ea9beb79218ef       nvidia-device-plugin-daemonset-gdvkl       kube-system
	f7167c650b5e1       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      40 seconds ago       Running             volume-snapshot-controller               0                   36124a258c8cd       snapshot-controller-7d9fbc56b8-2svxc       kube-system
	d810aa1a8b1b9       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              40 seconds ago       Running             yakd                                     0                   a0510f33396f5       yakd-dashboard-5ff678cb9-xl2rh             yakd-dashboard
	ba208fa7fba7a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      43 seconds ago       Running             volume-snapshot-controller               0                   580968b1bd643       snapshot-controller-7d9fbc56b8-bxzws       kube-system
	747643e95b13b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   44 seconds ago       Exited              patch                                    0                   a86be7d979883       ingress-nginx-admission-patch-dz5cl        ingress-nginx
	f253449a67080       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   45 seconds ago       Exited              create                                   0                   fd676a9deec95       ingress-nginx-admission-create-xjhbl       ingress-nginx
	b19e466fce39f       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               45 seconds ago       Running             cloud-spanner-emulator                   0                   e16aa8865ae21       cloud-spanner-emulator-5bdddb765-rjbxm     default
	515f2711c2508       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           49 seconds ago       Running             registry                                 0                   b20e2fdb67c3a       registry-6b586f9694-4ndqk                  kube-system
	db056cf136978       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               51 seconds ago       Running             minikube-ingress-dns                     0                   df085172f9221       kube-ingress-dns-minikube                  kube-system
	1925e5f023bf3       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             56 seconds ago       Running             local-path-provisioner                   0                   7aec947ab3b54       local-path-provisioner-648f6765c9-9gbt7    local-path-storage
	8aef252bd37ce       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        58 seconds ago       Running             metrics-server                           0                   09eef7b6ef683       metrics-server-85b7d694d7-fdkfv            kube-system
	ed6c258d3fc96       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             59 seconds ago       Running             storage-provisioner                      0                   bdc14e1057501       storage-provisioner                        kube-system
	62cac40636a5b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             59 seconds ago       Running             coredns                                  0                   aa3a6c671fa2e       coredns-66bc5c9577-4lmgt                   kube-system
	0d87471c625dc       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   bcf53154c86ac       kube-proxy-ngfdv                           kube-system
	ad0c6b77e41e3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   07058c03db816       kindnet-6j8vt                              kube-system
	8ae5e65fa7abb       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   52f09c08cc438       kube-controller-manager-addons-141726      kube-system
	d4ee4d2470fd1       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   67faec2250512       kube-apiserver-addons-141726               kube-system
	762c736ec2bae       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   4edd88401748c       etcd-addons-141726                         kube-system
	2ad3385ae6c40       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   76720f8880cd7       kube-scheduler-addons-141726               kube-system
	
	
	==> coredns [62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46] <==
	[INFO] 10.244.0.17:41925 - 16554 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00024605s
	[INFO] 10.244.0.17:55937 - 57330 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000117379s
	[INFO] 10.244.0.17:55937 - 57044 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000156233s
	[INFO] 10.244.0.17:34936 - 58825 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.00007571s
	[INFO] 10.244.0.17:34936 - 58527 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000141257s
	[INFO] 10.244.0.17:34877 - 13250 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000088992s
	[INFO] 10.244.0.17:34877 - 12935 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000156194s
	[INFO] 10.244.0.17:40501 - 53085 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000062635s
	[INFO] 10.244.0.17:40501 - 52816 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000095325s
	[INFO] 10.244.0.17:59827 - 32354 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000140612s
	[INFO] 10.244.0.17:59827 - 32565 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000158639s
	[INFO] 10.244.0.22:47534 - 52938 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000207686s
	[INFO] 10.244.0.22:58741 - 27013 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000297613s
	[INFO] 10.244.0.22:35742 - 61388 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000097889s
	[INFO] 10.244.0.22:40931 - 4852 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013861s
	[INFO] 10.244.0.22:50885 - 27242 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116006s
	[INFO] 10.244.0.22:57739 - 21531 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127882s
	[INFO] 10.244.0.22:41847 - 43397 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006989561s
	[INFO] 10.244.0.22:53632 - 31670 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.012419723s
	[INFO] 10.244.0.22:47138 - 2599 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005045187s
	[INFO] 10.244.0.22:57153 - 62265 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005427359s
	[INFO] 10.244.0.22:40503 - 40213 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004081528s
	[INFO] 10.244.0.22:42664 - 6280 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004944072s
	[INFO] 10.244.0.22:46145 - 6922 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001105682s
	[INFO] 10.244.0.22:50276 - 19793 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.002220391s
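The CoreDNS queries above show each lookup for the registry service walking the pod's DNS search path (svc.cluster.local, cluster.local, then the GCE-internal domains) and returning NXDOMAIN until the fully qualified registry.kube-system.svc.cluster.local name answers NOERROR. A quick way to exercise the same path from inside the cluster, sketched with a throwaway pod (the pod name is arbitrary; the image is the busybox build already pulled in this run):

kubectl run dns-probe --rm -it --restart=Never \
  --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- \
  nslookup registry.kube-system.svc.cluster.local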
	
	
	==> describe nodes <==
	Name:               addons-141726
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-141726
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=addons-141726
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_16_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-141726
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-141726"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:16:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-141726
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:17:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:17:31 +0000   Tue, 02 Dec 2025 15:16:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:17:31 +0000   Tue, 02 Dec 2025 15:16:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:17:31 +0000   Tue, 02 Dec 2025 15:16:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:17:31 +0000   Tue, 02 Dec 2025 15:16:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-141726
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                a9e18f89-559e-4220-a5c7-14350d2ece01
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-5bdddb765-rjbxm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  gadget                      gadget-sbzvc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  gcp-auth                    gcp-auth-78565c9fb4-v79fk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-hqxvp    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         70s
	  kube-system                 amd-gpu-device-plugin-5f7fs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 coredns-66bc5c9577-4lmgt                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     71s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 csi-hostpathplugin-kdbl4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 etcd-addons-141726                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         77s
	  kube-system                 kindnet-6j8vt                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      71s
	  kube-system                 kube-apiserver-addons-141726                250m (3%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-addons-141726       200m (2%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-ngfdv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-addons-141726                100m (1%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 metrics-server-85b7d694d7-fdkfv             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         71s
	  kube-system                 nvidia-device-plugin-daemonset-gdvkl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 registry-6b586f9694-4ndqk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 registry-creds-764b6fb674-pw2zl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 registry-proxy-md75n                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 snapshot-controller-7d9fbc56b8-2svxc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 snapshot-controller-7d9fbc56b8-bxzws        0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  local-path-storage          local-path-provisioner-648f6765c9-9gbt7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-xl2rh              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 70s   kube-proxy       
	  Normal  Starting                 77s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s   kubelet          Node addons-141726 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s   kubelet          Node addons-141726 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s   kubelet          Node addons-141726 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           72s   node-controller  Node addons-141726 event: Registered Node addons-141726 in Controller
	  Normal  NodeReady                60s   kubelet          Node addons-141726 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 02 04 38 5c 36 08 06
	[Dec 2 15:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 0a 0c 29 bf c8 08 06
	[  +0.752933] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e c8 88 91 09 5d 08 06
	[  +0.060723] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 92 5a 34 72 2c 08 06
	[  +4.667255] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 10 2a e0 d6 12 08 06
	[ +29.865451] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 1c 5d b6 2f d2 08 06
	[  +0.779472] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 5c ea 06 07 79 08 06
	[  +0.037703] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 53 6a dc 20 d1 08 06
	[  +4.345686] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 43 17 1d f4 23 08 06
	[Dec 2 15:05] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 9a 2e f0 83 a3 28 08 06
	[  +0.873992] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 43 6f a6 8c a3 08 06
	[  +0.047493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 9b c8 59 55 e7 08 06
	[  +4.389247] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 07 ad 09 99 ea 08 06
	
	
	==> etcd [762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e] <==
	{"level":"warn","ts":"2025-12-02T15:16:16.600276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.614848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.627281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.633831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.641212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.657511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.663645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.669922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:16.720657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:27.141314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:27.148820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:52.651512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:52.658184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:52.672377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:52.678895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:17:00.892826Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.415932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:17:00.892923Z","caller":"traceutil/trace.go:172","msg":"trace[990663354] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1051; }","duration":"172.526409ms","start":"2025-12-02T15:17:00.720384Z","end":"2025-12-02T15:17:00.892911Z","steps":["trace[990663354] 'agreement among raft nodes before linearized reading'  (duration: 34.533196ms)","trace[990663354] 'range keys from in-memory index tree'  (duration: 137.85094ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T15:17:00.892873Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.441955ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:17:00.893031Z","caller":"traceutil/trace.go:172","msg":"trace[1250853938] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1051; }","duration":"172.622077ms","start":"2025-12-02T15:17:00.720396Z","end":"2025-12-02T15:17:00.893018Z","steps":["trace[1250853938] 'agreement among raft nodes before linearized reading'  (duration: 34.524535ms)","trace[1250853938] 'range keys from in-memory index tree'  (duration: 137.887411ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T15:17:00.893360Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.882048ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041712795482208 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-gdvkl\" mod_revision:868 > success:<request_put:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-gdvkl\" value_size:4441 >> failure:<request_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-gdvkl\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-02T15:17:00.893437Z","caller":"traceutil/trace.go:172","msg":"trace[984492781] linearizableReadLoop","detail":"{readStateIndex:1077; appliedIndex:1076; }","duration":"138.534027ms","start":"2025-12-02T15:17:00.754878Z","end":"2025-12-02T15:17:00.893412Z","steps":["trace[984492781] 'read index received'  (duration: 24.828µs)","trace[984492781] 'applied index is now lower than readState.Index'  (duration: 138.508682ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:17:00.893474Z","caller":"traceutil/trace.go:172","msg":"trace[877079946] transaction","detail":"{read_only:false; response_revision:1052; number_of_response:1; }","duration":"264.022269ms","start":"2025-12-02T15:17:00.629434Z","end":"2025-12-02T15:17:00.893457Z","steps":["trace[877079946] 'process raft request'  (duration: 125.544312ms)","trace[877079946] 'compare'  (duration: 137.793061ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T15:17:00.893503Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.036312ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:17:00.893543Z","caller":"traceutil/trace.go:172","msg":"trace[951369430] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1052; }","duration":"161.077375ms","start":"2025-12-02T15:17:00.732457Z","end":"2025-12-02T15:17:00.893534Z","steps":["trace[951369430] 'agreement among raft nodes before linearized reading'  (duration: 161.011697ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T15:17:01.172193Z","caller":"traceutil/trace.go:172","msg":"trace[1342566136] transaction","detail":"{read_only:false; response_revision:1054; number_of_response:1; }","duration":"271.150061ms","start":"2025-12-02T15:17:00.901025Z","end":"2025-12-02T15:17:01.172175Z","steps":["trace[1342566136] 'process raft request'  (duration: 242.038794ms)","trace[1342566136] 'compare'  (duration: 29.005164ms)"],"step_count":2}
	
	
	==> gcp-auth [fadc45d9931b2c9a66e4bdd265caa4ccd0769d6687d165834032c549cc4b8fa4] <==
	2025/12/02 15:17:17 GCP Auth Webhook started!
	2025/12/02 15:17:24 Ready to marshal response ...
	2025/12/02 15:17:24 Ready to write response ...
	2025/12/02 15:17:24 Ready to marshal response ...
	2025/12/02 15:17:24 Ready to write response ...
	2025/12/02 15:17:24 Ready to marshal response ...
	2025/12/02 15:17:24 Ready to write response ...
	
	
	==> kernel <==
	 15:17:36 up  1:59,  0 user,  load average: 2.52, 1.68, 1.46
	Linux addons-141726 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4] <==
	I1202 15:16:26.097905       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 15:16:26.100772       1 controller.go:381] "Waiting for informer caches to sync"
	E1202 15:16:26.140856       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1202 15:16:26.141020       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 15:16:26.141168       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1202 15:16:26.192861       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 15:16:26.195823       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1202 15:16:26.197256       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1202 15:16:27.496261       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 15:16:27.496291       1 metrics.go:72] Registering metrics
	I1202 15:16:27.496353       1 controller.go:711] "Syncing nftables rules"
	I1202 15:16:36.097631       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:16:36.097684       1 main.go:301] handling current node
	I1202 15:16:46.097551       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:16:46.097596       1 main.go:301] handling current node
	I1202 15:16:56.098076       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:16:56.098116       1 main.go:301] handling current node
	I1202 15:17:06.097543       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:17:06.097604       1 main.go:301] handling current node
	I1202 15:17:16.097380       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:17:16.097444       1 main.go:301] handling current node
	I1202 15:17:26.097532       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:17:26.097572       1 main.go:301] handling current node
	I1202 15:17:36.098071       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:17:36.098109       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93] <==
	 > logger="UnhandledError"
	E1202 15:16:40.567493       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.244.38:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.244.38:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.244.38:443: connect: connection refused" logger="UnhandledError"
	E1202 15:16:40.569168       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.244.38:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.244.38:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.244.38:443: connect: connection refused" logger="UnhandledError"
	W1202 15:16:41.567786       1 handler_proxy.go:99] no RequestInfo found in the context
	W1202 15:16:41.567803       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 15:16:41.568057       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1202 15:16:41.568088       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1202 15:16:41.568096       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 15:16:41.569243       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 15:16:45.580284       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 15:16:45.580337       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1202 15:16:45.580362       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.244.38:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.244.38:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1202 15:16:45.588894       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1202 15:16:52.651371       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 15:16:52.658151       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 15:16:52.672312       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 15:16:52.678896       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1202 15:17:34.769812       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39610: use of closed network connection
	E1202 15:17:34.927363       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39642: use of closed network connection
	
	
	==> kube-controller-manager [8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911] <==
	I1202 15:16:24.104224       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 15:16:24.104160       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 15:16:24.104774       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 15:16:24.108686       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:16:24.108772       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 15:16:24.108789       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 15:16:24.108800       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 15:16:24.108802       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 15:16:24.115254       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:16:24.123532       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 15:16:24.129812       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 15:16:24.135190       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 15:16:24.142446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 15:16:24.150882       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 15:16:24.154129       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 15:16:24.154140       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 15:16:24.154157       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 15:16:24.154219       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 15:16:24.154225       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 15:16:39.105417       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1202 15:16:54.121646       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1202 15:16:54.121738       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1202 15:16:54.153720       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1202 15:16:54.222652       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:16:54.254132       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad] <==
	I1202 15:16:25.853277       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:16:25.962494       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 15:16:26.063134       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 15:16:26.063845       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:16:26.063975       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:16:26.087762       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:16:26.087816       1 server_linux.go:132] "Using iptables Proxier"
	I1202 15:16:26.094097       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:16:26.094560       1 server.go:527] "Version info" version="v1.34.2"
	I1202 15:16:26.094604       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:16:26.096806       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:16:26.096839       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:16:26.096857       1 config.go:200] "Starting service config controller"
	I1202 15:16:26.096864       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:16:26.096892       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:16:26.096897       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:16:26.097227       1 config.go:309] "Starting node config controller"
	I1202 15:16:26.097239       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:16:26.097252       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:16:26.197515       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:16:26.197514       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:16:26.197563       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb] <==
	E1202 15:16:17.124893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:16:17.124921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 15:16:17.124954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 15:16:17.124974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 15:16:17.124982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 15:16:17.124998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 15:16:17.125004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 15:16:17.125052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 15:16:17.125080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:16:17.125152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 15:16:17.125234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:16:17.125260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 15:16:17.996089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 15:16:18.021360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 15:16:18.026918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:16:18.078405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 15:16:18.110458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 15:16:18.200195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 15:16:18.240698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 15:16:18.249838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:16:18.306060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:16:18.310110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 15:16:18.370933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 15:16:18.375027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1202 15:16:20.821964       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 15:17:05 addons-141726 kubelet[1278]: I1202 15:17:05.647096    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-5f7fs" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 15:17:05 addons-141726 kubelet[1278]: I1202 15:17:05.657971    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-5f7fs" podStartSLOduration=1.739362302 podStartE2EDuration="29.657953652s" podCreationTimestamp="2025-12-02 15:16:36 +0000 UTC" firstStartedPulling="2025-12-02 15:16:36.929317183 +0000 UTC m=+17.587676961" lastFinishedPulling="2025-12-02 15:17:04.847908541 +0000 UTC m=+45.506268311" observedRunningTime="2025-12-02 15:17:05.657131576 +0000 UTC m=+46.315491362" watchObservedRunningTime="2025-12-02 15:17:05.657953652 +0000 UTC m=+46.316313436"
	Dec 02 15:17:06 addons-141726 kubelet[1278]: I1202 15:17:06.650334    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-5f7fs" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 15:17:07 addons-141726 kubelet[1278]: I1202 15:17:07.657115    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-md75n" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 15:17:07 addons-141726 kubelet[1278]: I1202 15:17:07.667070    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-md75n" podStartSLOduration=1.133848437 podStartE2EDuration="31.667049485s" podCreationTimestamp="2025-12-02 15:16:36 +0000 UTC" firstStartedPulling="2025-12-02 15:16:37.010565882 +0000 UTC m=+17.668925652" lastFinishedPulling="2025-12-02 15:17:07.543766935 +0000 UTC m=+48.202126700" observedRunningTime="2025-12-02 15:17:07.666747223 +0000 UTC m=+48.325107010" watchObservedRunningTime="2025-12-02 15:17:07.667049485 +0000 UTC m=+48.325409270"
	Dec 02 15:17:08 addons-141726 kubelet[1278]: E1202 15:17:08.334790    1278 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 02 15:17:08 addons-141726 kubelet[1278]: E1202 15:17:08.334882    1278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35-gcr-creds podName:39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35 nodeName:}" failed. No retries permitted until 2025-12-02 15:17:40.33486586 +0000 UTC m=+80.993225625 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35-gcr-creds") pod "registry-creds-764b6fb674-pw2zl" (UID: "39dd35ee-37c3-4b6e-a06a-17ebd9a9bf35") : secret "registry-creds-gcr" not found
	Dec 02 15:17:08 addons-141726 kubelet[1278]: I1202 15:17:08.660773    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-md75n" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 15:17:10 addons-141726 kubelet[1278]: I1202 15:17:10.687117    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-sbzvc" podStartSLOduration=17.10029957 podStartE2EDuration="45.687092816s" podCreationTimestamp="2025-12-02 15:16:25 +0000 UTC" firstStartedPulling="2025-12-02 15:16:41.954960399 +0000 UTC m=+22.613320176" lastFinishedPulling="2025-12-02 15:17:10.541753638 +0000 UTC m=+51.200113422" observedRunningTime="2025-12-02 15:17:10.68636117 +0000 UTC m=+51.344721015" watchObservedRunningTime="2025-12-02 15:17:10.687092816 +0000 UTC m=+51.345452604"
	Dec 02 15:17:11 addons-141726 kubelet[1278]: I1202 15:17:11.426675    1278 scope.go:117] "RemoveContainer" containerID="67a5fbcc990c64576c6ee12fe622d7fe0de5fe9b12083fae98874a4acd68c37a"
	Dec 02 15:17:11 addons-141726 kubelet[1278]: I1202 15:17:11.680122    1278 scope.go:117] "RemoveContainer" containerID="67a5fbcc990c64576c6ee12fe622d7fe0de5fe9b12083fae98874a4acd68c37a"
	Dec 02 15:17:13 addons-141726 kubelet[1278]: I1202 15:17:13.277854    1278 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpgwr\" (UniqueName: \"kubernetes.io/projected/c837eee8-e9d8-459a-891d-31d108d3f03b-kube-api-access-rpgwr\") pod \"c837eee8-e9d8-459a-891d-31d108d3f03b\" (UID: \"c837eee8-e9d8-459a-891d-31d108d3f03b\") "
	Dec 02 15:17:13 addons-141726 kubelet[1278]: I1202 15:17:13.280270    1278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c837eee8-e9d8-459a-891d-31d108d3f03b-kube-api-access-rpgwr" (OuterVolumeSpecName: "kube-api-access-rpgwr") pod "c837eee8-e9d8-459a-891d-31d108d3f03b" (UID: "c837eee8-e9d8-459a-891d-31d108d3f03b"). InnerVolumeSpecName "kube-api-access-rpgwr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 02 15:17:13 addons-141726 kubelet[1278]: I1202 15:17:13.378584    1278 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rpgwr\" (UniqueName: \"kubernetes.io/projected/c837eee8-e9d8-459a-891d-31d108d3f03b-kube-api-access-rpgwr\") on node \"addons-141726\" DevicePath \"\""
	Dec 02 15:17:13 addons-141726 kubelet[1278]: I1202 15:17:13.688166    1278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e67980403726c1031bff563b8a5dd84638da24dbc875056efe282d6d7c80e4f7"
	Dec 02 15:17:17 addons-141726 kubelet[1278]: I1202 15:17:17.727388    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-hqxvp" podStartSLOduration=30.100446527 podStartE2EDuration="51.727364346s" podCreationTimestamp="2025-12-02 15:16:26 +0000 UTC" firstStartedPulling="2025-12-02 15:16:52.576948441 +0000 UTC m=+33.235308217" lastFinishedPulling="2025-12-02 15:17:14.203866265 +0000 UTC m=+54.862226036" observedRunningTime="2025-12-02 15:17:14.710047899 +0000 UTC m=+55.368407685" watchObservedRunningTime="2025-12-02 15:17:17.727364346 +0000 UTC m=+58.385724134"
	Dec 02 15:17:17 addons-141726 kubelet[1278]: I1202 15:17:17.728391    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-v79fk" podStartSLOduration=37.291329622 podStartE2EDuration="45.728369228s" podCreationTimestamp="2025-12-02 15:16:32 +0000 UTC" firstStartedPulling="2025-12-02 15:17:08.848389564 +0000 UTC m=+49.506749346" lastFinishedPulling="2025-12-02 15:17:17.285429173 +0000 UTC m=+57.943788952" observedRunningTime="2025-12-02 15:17:17.72572101 +0000 UTC m=+58.384080798" watchObservedRunningTime="2025-12-02 15:17:17.728369228 +0000 UTC m=+58.386729014"
	Dec 02 15:17:19 addons-141726 kubelet[1278]: I1202 15:17:19.472106    1278 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 02 15:17:19 addons-141726 kubelet[1278]: I1202 15:17:19.472165    1278 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 02 15:17:21 addons-141726 kubelet[1278]: I1202 15:17:21.756307    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-kdbl4" podStartSLOduration=1.820349057 podStartE2EDuration="45.756289014s" podCreationTimestamp="2025-12-02 15:16:36 +0000 UTC" firstStartedPulling="2025-12-02 15:16:36.920505432 +0000 UTC m=+17.578865209" lastFinishedPulling="2025-12-02 15:17:20.8564454 +0000 UTC m=+61.514805166" observedRunningTime="2025-12-02 15:17:21.755648829 +0000 UTC m=+62.414008638" watchObservedRunningTime="2025-12-02 15:17:21.756289014 +0000 UTC m=+62.414648802"
	Dec 02 15:17:24 addons-141726 kubelet[1278]: I1202 15:17:24.660370    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/68f402eb-f188-423f-828c-892475faf6db-gcp-creds\") pod \"busybox\" (UID: \"68f402eb-f188-423f-828c-892475faf6db\") " pod="default/busybox"
	Dec 02 15:17:24 addons-141726 kubelet[1278]: I1202 15:17:24.660533    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5lrf\" (UniqueName: \"kubernetes.io/projected/68f402eb-f188-423f-828c-892475faf6db-kube-api-access-d5lrf\") pod \"busybox\" (UID: \"68f402eb-f188-423f-828c-892475faf6db\") " pod="default/busybox"
	Dec 02 15:17:27 addons-141726 kubelet[1278]: I1202 15:17:27.782225    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.8287607270000001 podStartE2EDuration="3.782208427s" podCreationTimestamp="2025-12-02 15:17:24 +0000 UTC" firstStartedPulling="2025-12-02 15:17:24.948605401 +0000 UTC m=+65.606965178" lastFinishedPulling="2025-12-02 15:17:26.902053113 +0000 UTC m=+67.560412878" observedRunningTime="2025-12-02 15:17:27.780988373 +0000 UTC m=+68.439348159" watchObservedRunningTime="2025-12-02 15:17:27.782208427 +0000 UTC m=+68.440568211"
	Dec 02 15:17:33 addons-141726 kubelet[1278]: I1202 15:17:33.428010    1278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4890f5b4-863b-4189-99e5-2e0e99f6c1d6" path="/var/lib/kubelet/pods/4890f5b4-863b-4189-99e5-2e0e99f6c1d6/volumes"
	Dec 02 15:17:34 addons-141726 kubelet[1278]: E1202 15:17:34.927269    1278 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:32784->127.0.0.1:35321: write tcp 127.0.0.1:32784->127.0.0.1:35321: write: broken pipe
	
	
	==> storage-provisioner [ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2] <==
	W1202 15:17:11.290271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:13.293736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:13.298674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:15.301658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:15.307077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:17.310518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:17.314454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:19.318509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:19.324158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:21.327559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:21.332299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:23.335636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:23.339565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:25.342589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:25.346390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:27.349739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:27.354936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:29.357693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:29.361949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:31.364564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:31.369961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:33.372864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:33.376750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:35.379823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:17:35.385619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-141726 -n addons-141726
helpers_test.go:269: (dbg) Run:  kubectl --context addons-141726 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-patch-tblv4 ingress-nginx-admission-create-xjhbl ingress-nginx-admission-patch-dz5cl registry-creds-764b6fb674-pw2zl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-141726 describe pod gcp-auth-certs-patch-tblv4 ingress-nginx-admission-create-xjhbl ingress-nginx-admission-patch-dz5cl registry-creds-764b6fb674-pw2zl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-141726 describe pod gcp-auth-certs-patch-tblv4 ingress-nginx-admission-create-xjhbl ingress-nginx-admission-patch-dz5cl registry-creds-764b6fb674-pw2zl: exit status 1 (62.758188ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-tblv4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-xjhbl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dz5cl" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-pw2zl" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-141726 describe pod gcp-auth-certs-patch-tblv4 ingress-nginx-admission-create-xjhbl ingress-nginx-admission-patch-dz5cl registry-creds-764b6fb674-pw2zl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable headlamp --alsologtostderr -v=1: exit status 11 (259.34763ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 15:17:37.600901  278778 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:37.601023  278778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:37.601031  278778 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:37.601035  278778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:37.601268  278778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:17:37.601552  278778 mustload.go:66] Loading cluster: addons-141726
	I1202 15:17:37.601887  278778 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:37.601908  278778 addons.go:622] checking whether the cluster is paused
	I1202 15:17:37.601983  278778 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:37.602000  278778 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:17:37.602411  278778 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:17:37.621663  278778 ssh_runner.go:195] Run: systemctl --version
	I1202 15:17:37.621734  278778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:17:37.640779  278778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:17:37.740229  278778 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:17:37.740303  278778 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:17:37.773947  278778 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:17:37.773973  278778 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:17:37.773980  278778 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:17:37.773986  278778 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:17:37.774002  278778 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:17:37.774008  278778 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:17:37.774014  278778 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:17:37.774019  278778 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:17:37.774025  278778 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:17:37.774035  278778 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:17:37.774045  278778 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:17:37.774051  278778 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:17:37.774060  278778 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:17:37.774067  278778 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:17:37.774075  278778 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:17:37.774090  278778 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:17:37.774099  278778 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:17:37.774113  278778 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:17:37.774118  278778 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:17:37.774123  278778 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:17:37.774131  278778 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:17:37.774140  278778 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:17:37.774146  278778 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:17:37.774158  278778 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:17:37.774173  278778 cri.go:89] found id: ""
	I1202 15:17:37.774231  278778 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:17:37.789120  278778 out.go:203] 
	W1202 15:17:37.790064  278778 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:17:37.790082  278778 out.go:285] * 
	* 
	W1202 15:17:37.793264  278778 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:17:37.794558  278778 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.62s)
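Note on the addon-disable failures in this report: before disabling an addon, minikube checks whether the cluster is paused; on this crio node the CRI-level container listing succeeds, but the follow-up "sudo runc list -f json" exits non-zero with "open /run/runc: no such file or directory", so the command aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch of reproducing the two checks by hand, assuming the profile name addons-141726 from this run (minikube ssh forwards the command to the node; both commands are taken from the log above):

	# CRI-level listing used by the paused check (succeeds here)
	minikube -p addons-141726 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# runc fallback that fails in this run because /run/runc is missing on the node
	minikube -p addons-141726 ssh -- sudo runc list -f json

The same MK_ADDON_DISABLE_PAUSED failure recurs in the CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd, and AmdGpuDevicePlugin results below.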

                                                
                                    
TestAddons/parallel/CloudSpanner (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-rjbxm" [46a6a6b6-f04e-45de-8e84-24727662a1ad] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003581991s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (251.98942ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:17:46.516794  279345 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:46.517099  279345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:46.517111  279345 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:46.517115  279345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:46.517336  279345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:17:46.517678  279345 mustload.go:66] Loading cluster: addons-141726
	I1202 15:17:46.517997  279345 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:46.518017  279345 addons.go:622] checking whether the cluster is paused
	I1202 15:17:46.518096  279345 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:46.518112  279345 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:17:46.518488  279345 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:17:46.536778  279345 ssh_runner.go:195] Run: systemctl --version
	I1202 15:17:46.536850  279345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:17:46.555388  279345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:17:46.655048  279345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:17:46.655151  279345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:17:46.685728  279345 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:17:46.685749  279345 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:17:46.685754  279345 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:17:46.685757  279345 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:17:46.685760  279345 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:17:46.685763  279345 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:17:46.685777  279345 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:17:46.685780  279345 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:17:46.685783  279345 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:17:46.685789  279345 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:17:46.685791  279345 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:17:46.685794  279345 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:17:46.685797  279345 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:17:46.685800  279345 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:17:46.685804  279345 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:17:46.685811  279345 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:17:46.685817  279345 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:17:46.685820  279345 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:17:46.685823  279345 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:17:46.685826  279345 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:17:46.685829  279345 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:17:46.685831  279345 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:17:46.685834  279345 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:17:46.685837  279345 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:17:46.685840  279345 cri.go:89] found id: ""
	I1202 15:17:46.685878  279345 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:17:46.699897  279345 out.go:203] 
	W1202 15:17:46.701087  279345 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:17:46.701107  279345 out.go:285] * 
	* 
	W1202 15:17:46.704288  279345 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:17:46.705528  279345 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.26s)

                                                
                                    
TestAddons/parallel/LocalPath (10.15s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-141726 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-141726 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-141726 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [07b6979b-b0e1-4785-a8bc-d8c037e14333] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [07b6979b-b0e1-4785-a8bc-d8c037e14333] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [07b6979b-b0e1-4785-a8bc-d8c037e14333] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002969846s
addons_test.go:967: (dbg) Run:  kubectl --context addons-141726 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 ssh "cat /opt/local-path-provisioner/pvc-3465dce8-839e-40a0-b246-a6443acf23da_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-141726 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-141726 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (265.752863ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:17:47.750832  279625 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:47.751015  279625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:47.751028  279625 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:47.751033  279625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:47.751273  279625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:17:47.751616  279625 mustload.go:66] Loading cluster: addons-141726
	I1202 15:17:47.752109  279625 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:47.752141  279625 addons.go:622] checking whether the cluster is paused
	I1202 15:17:47.752275  279625 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:47.752299  279625 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:17:47.752828  279625 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:17:47.772311  279625 ssh_runner.go:195] Run: systemctl --version
	I1202 15:17:47.772360  279625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:17:47.790712  279625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:17:47.891999  279625 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:17:47.892080  279625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:17:47.921693  279625 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:17:47.921721  279625 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:17:47.921728  279625 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:17:47.921733  279625 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:17:47.921738  279625 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:17:47.921743  279625 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:17:47.921747  279625 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:17:47.921751  279625 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:17:47.921755  279625 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:17:47.921760  279625 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:17:47.921764  279625 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:17:47.921766  279625 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:17:47.921769  279625 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:17:47.921772  279625 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:17:47.921774  279625 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:17:47.921779  279625 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:17:47.921782  279625 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:17:47.921787  279625 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:17:47.921789  279625 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:17:47.921792  279625 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:17:47.921803  279625 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:17:47.921805  279625 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:17:47.921811  279625 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:17:47.921815  279625 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:17:47.921818  279625 cri.go:89] found id: ""
	I1202 15:17:47.921869  279625 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:17:47.937786  279625 out.go:203] 
	W1202 15:17:47.938961  279625 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:17:47.938991  279625 out.go:285] * 
	* 
	W1202 15:17:47.943676  279625 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:17:47.944982  279625 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.15s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-gdvkl" [387a816e-abc8-433b-85b4-4c9d2df06ea3] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0044516s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (259.638288ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:17:40.248144  278960 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:40.248265  278960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:40.248272  278960 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:40.248278  278960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:40.249077  278960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:17:40.249349  278960 mustload.go:66] Loading cluster: addons-141726
	I1202 15:17:40.249747  278960 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:40.249773  278960 addons.go:622] checking whether the cluster is paused
	I1202 15:17:40.249866  278960 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:40.249882  278960 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:17:40.250258  278960 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:17:40.269773  278960 ssh_runner.go:195] Run: systemctl --version
	I1202 15:17:40.269828  278960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:17:40.289466  278960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:17:40.390194  278960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:17:40.390275  278960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:17:40.421709  278960 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:17:40.421747  278960 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:17:40.421753  278960 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:17:40.421758  278960 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:17:40.421763  278960 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:17:40.421768  278960 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:17:40.421772  278960 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:17:40.421776  278960 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:17:40.421781  278960 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:17:40.421801  278960 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:17:40.421807  278960 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:17:40.421811  278960 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:17:40.421816  278960 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:17:40.421820  278960 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:17:40.421825  278960 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:17:40.421840  278960 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:17:40.421848  278960 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:17:40.421854  278960 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:17:40.421859  278960 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:17:40.421863  278960 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:17:40.421870  278960 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:17:40.421874  278960 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:17:40.421882  278960 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:17:40.421886  278960 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:17:40.421890  278960 cri.go:89] found id: ""
	I1202 15:17:40.421954  278960 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:17:40.437059  278960 out.go:203] 
	W1202 15:17:40.438788  278960 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:17:40.438819  278960 out.go:285] * 
	* 
	W1202 15:17:40.442014  278960 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:17:40.443593  278960 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                    
TestAddons/parallel/Yakd (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-xl2rh" [ce0fa247-8ace-490c-aa0b-84f7c911d514] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004041191s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable yakd --alsologtostderr -v=1: exit status 11 (255.777441ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:17:45.512891  279244 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:45.513134  279244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:45.513143  279244 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:45.513148  279244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:45.513388  279244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:17:45.513681  279244 mustload.go:66] Loading cluster: addons-141726
	I1202 15:17:45.514036  279244 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:45.514056  279244 addons.go:622] checking whether the cluster is paused
	I1202 15:17:45.514134  279244 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:45.514150  279244 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:17:45.514577  279244 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:17:45.534153  279244 ssh_runner.go:195] Run: systemctl --version
	I1202 15:17:45.534222  279244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:17:45.551898  279244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:17:45.652139  279244 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:17:45.652244  279244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:17:45.684553  279244 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:17:45.684600  279244 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:17:45.684606  279244 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:17:45.684612  279244 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:17:45.684616  279244 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:17:45.684623  279244 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:17:45.684628  279244 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:17:45.684632  279244 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:17:45.684646  279244 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:17:45.684659  279244 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:17:45.684665  279244 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:17:45.684669  279244 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:17:45.684672  279244 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:17:45.684675  279244 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:17:45.684678  279244 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:17:45.684696  279244 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:17:45.684707  279244 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:17:45.684714  279244 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:17:45.684719  279244 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:17:45.684726  279244 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:17:45.684730  279244 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:17:45.684738  279244 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:17:45.684743  279244 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:17:45.684750  279244 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:17:45.684755  279244 cri.go:89] found id: ""
	I1202 15:17:45.684824  279244 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:17:45.699859  279244 out.go:203] 
	W1202 15:17:45.701180  279244 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:17:45.701213  279244 out.go:285] * 
	* 
	W1202 15:17:45.704492  279244 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:17:45.705899  279244 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-5f7fs" [f2b19fdb-b25c-4936-aabf-26c33a233e0e] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004765038s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-141726 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-141726 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (260.00429ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:17:40.248304  278961 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:40.248497  278961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:40.248504  278961 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:40.248510  278961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:40.248786  278961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:17:40.249101  278961 mustload.go:66] Loading cluster: addons-141726
	I1202 15:17:40.249503  278961 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:40.249528  278961 addons.go:622] checking whether the cluster is paused
	I1202 15:17:40.249614  278961 config.go:182] Loaded profile config "addons-141726": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:17:40.249631  278961 host.go:66] Checking if "addons-141726" exists ...
	I1202 15:17:40.250003  278961 cli_runner.go:164] Run: docker container inspect addons-141726 --format={{.State.Status}}
	I1202 15:17:40.269566  278961 ssh_runner.go:195] Run: systemctl --version
	I1202 15:17:40.269628  278961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-141726
	I1202 15:17:40.290222  278961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/addons-141726/id_rsa Username:docker}
	I1202 15:17:40.390255  278961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 15:17:40.390345  278961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 15:17:40.422022  278961 cri.go:89] found id: "5412cbcb9dad23c931f90a92d80b1e256500b274b20fcff6807ec93ce486087b"
	I1202 15:17:40.422064  278961 cri.go:89] found id: "584adcb3687a901fe5060be0a1fb1600c34509b47bb4484486d6cd5d48c6ffad"
	I1202 15:17:40.422071  278961 cri.go:89] found id: "00fe2f1035e365363b059dcce9f3e0b81a3ad886b8c75acee254b56191e6e863"
	I1202 15:17:40.422078  278961 cri.go:89] found id: "c829c27b2be0ca5be71ec02d7cf4e0c49251f9e60910a0e3454bbcddca4fafcd"
	I1202 15:17:40.422082  278961 cri.go:89] found id: "3307ae3898cde405eec22b3674ace2b05333d6bbcffefc76a3d7305a9043c2e4"
	I1202 15:17:40.422089  278961 cri.go:89] found id: "874bcd460b4cc02e9dad7f167a328196ff5beb9a1012aaf8f5f4f2a393d906b8"
	I1202 15:17:40.422098  278961 cri.go:89] found id: "a087fdb2d51ae5fccdb5086d81846803a6e8c037c736644f24867541b19508e7"
	I1202 15:17:40.422102  278961 cri.go:89] found id: "44579fbc5bf765a529aaedb60d661600810b558ed14b53a2791275b051e41cea"
	I1202 15:17:40.422105  278961 cri.go:89] found id: "398f34ffd447f6b12322f761f0de9ff3970accc3446665dd1322ad321ade5e55"
	I1202 15:17:40.422118  278961 cri.go:89] found id: "51d82f122a46867d50393b13527c7d9616831cbadd2d989162e47bf6ff9995bf"
	I1202 15:17:40.422124  278961 cri.go:89] found id: "1433ec009789d9dece3a29309283858693fe9891c2c0deadf76da3ccde6e4d3a"
	I1202 15:17:40.422127  278961 cri.go:89] found id: "f7167c650b5e128373caeda86412c6686ea43f9fca94eebbea1e6330f1681df7"
	I1202 15:17:40.422130  278961 cri.go:89] found id: "ba208fa7fba7a8aa77172696fd226f8981ccd6cb050eebd1eaf36ffd634ae40b"
	I1202 15:17:40.422133  278961 cri.go:89] found id: "515f2711c25082029bfb73f256ab315837df06ad3ce2e28c43e6ca4f915ff98a"
	I1202 15:17:40.422136  278961 cri.go:89] found id: "db056cf136978dcdc941d292176e5dd0ae726b09d2af66bd9cfe26cd49867515"
	I1202 15:17:40.422147  278961 cri.go:89] found id: "8aef252bd37ce68223492a3d106cdc49f9000fcca914af4aed5855230552d3cf"
	I1202 15:17:40.422154  278961 cri.go:89] found id: "ed6c258d3fc965340e6765fb91d648d86cbd0ec27ffbf12d3f7f75dc84c42fe2"
	I1202 15:17:40.422199  278961 cri.go:89] found id: "62cac40636a5b133526a2d722e6709f00017736dd1cfc7e3133b26af70363e46"
	I1202 15:17:40.422212  278961 cri.go:89] found id: "0d87471c625dcf0f8df21886299158a0c0d136ac58fb47ca1dfde4ddef6434ad"
	I1202 15:17:40.422217  278961 cri.go:89] found id: "ad0c6b77e41e3a16f35df40501b8e69476519899dc85d64c8b2cf07c30b31ce4"
	I1202 15:17:40.422225  278961 cri.go:89] found id: "8ae5e65fa7abba6b7bd24be1ea23cf338ad905d35d5835b7da18a1374d635911"
	I1202 15:17:40.422233  278961 cri.go:89] found id: "d4ee4d2470fd104c16bcc7fe722d5ffc59c1d0a056ffed4d4587e05f1855bf93"
	I1202 15:17:40.422238  278961 cri.go:89] found id: "762c736ec2bae28c970f9e38d7f5c0753e1d54378a7fa586b85636f18b0e547e"
	I1202 15:17:40.422247  278961 cri.go:89] found id: "2ad3385ae6c4074333e7cd6e406cceedb26093169a52ca39f8c4e0168ed2a9eb"
	I1202 15:17:40.422252  278961 cri.go:89] found id: ""
	I1202 15:17:40.422311  278961 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 15:17:40.437889  278961 out.go:203] 
	W1202 15:17:40.439520  278961 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:17:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 15:17:40.439546  278961 out.go:285] * 
	* 
	W1202 15:17:40.442658  278961 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 15:17:40.444323  278961 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-141726 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-298630 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-298630 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-7p7xj" [70101e79-be4a-41e4-8ef8-bef1edd42621] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-298630 -n functional-298630
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-02 15:33:15.266712164 +0000 UTC m=+1082.133240862
functional_test.go:1645: (dbg) Run:  kubectl --context functional-298630 describe po hello-node-connect-7d85dfc575-7p7xj -n default
functional_test.go:1645: (dbg) kubectl --context functional-298630 describe po hello-node-connect-7d85dfc575-7p7xj -n default:
Name:             hello-node-connect-7d85dfc575-7p7xj
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-298630/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:23:14 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lw5tv (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lw5tv:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7p7xj to functional-298630
Normal   Pulling    7m7s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m44s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-298630 logs hello-node-connect-7d85dfc575-7p7xj -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-298630 logs hello-node-connect-7d85dfc575-7p7xj -n default: exit status 1 (68.528822ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-7p7xj" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-298630 logs hello-node-connect-7d85dfc575-7p7xj -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
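The kubelet events above contain the actual root cause: CRI-O on this node enforces short-name resolution, so the unqualified reference "kicbase/echo-server" used when the deployment was created cannot be resolved to a single registry and every pull attempt fails with "returns ambiguous list", which then cascades into ImagePullBackOff and the 10m0s timeout. A minimal sketch of a workaround, assuming the image is meant to come from Docker Hub (an assumption on my part; the test may instead rely on a registry alias being configured on the node), is to run the same kubectl step with a fully-qualified reference so short-name resolution never has to run:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Hypothetical re-run of the failing test step, but with a fully-qualified
		// image reference so CRI-O's enforcing short-name mode has nothing to resolve.
		// "docker.io/kicbase/echo-server:latest" is an assumed source registry.
		args := []string{
			"--context", "functional-298630",
			"create", "deployment", "hello-node-connect",
			"--image", "docker.io/kicbase/echo-server:latest",
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}

Alternatively the node's registries configuration could be relaxed or given an alias for the short name, but that changes cluster configuration rather than the test input, so the fully-qualified image is the less invasive option.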
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-298630 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-7p7xj
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-298630/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:23:14 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lw5tv (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lw5tv:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7p7xj to functional-298630
Normal   Pulling    7m7s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m44s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-298630 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-298630 logs -l app=hello-node-connect: exit status 1 (68.3015ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-7p7xj" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-298630 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-298630 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.52.121
IPs:                      10.111.52.121
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32304/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
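The service describe confirms the symptom end to end: the NodePort service exists (port 8080, NodePort 32304) but its Endpoints list is empty because the only matching pod never became Ready. A quick, out-of-band way to double-check that nothing is serving behind the NodePort is a plain TCP probe against the node address; the sketch below is only a diagnostic aid (not part of the test suite) and assumes the node IP 192.168.49.2 and NodePort 32304 reported above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the NodePort reported by `kubectl describe svc hello-node-connect`.
		// With no ready endpoints behind the service, the connection is expected
		// to fail (refused or timed out) rather than reach an echo-server pod.
		addr := "192.168.49.2:32304"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("no backend reachable at %s: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("connected to %s (a backend is serving)\n", addr)
	}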
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-298630
helpers_test.go:243: (dbg) docker inspect functional-298630:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "22d05dcdb0520ac10bad7ddf4ce6eaa0d76070952189fb5e99dc8390be8da836",
	        "Created": "2025-12-02T15:21:26.659820531Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292516,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:21:26.692958298Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/22d05dcdb0520ac10bad7ddf4ce6eaa0d76070952189fb5e99dc8390be8da836/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/22d05dcdb0520ac10bad7ddf4ce6eaa0d76070952189fb5e99dc8390be8da836/hostname",
	        "HostsPath": "/var/lib/docker/containers/22d05dcdb0520ac10bad7ddf4ce6eaa0d76070952189fb5e99dc8390be8da836/hosts",
	        "LogPath": "/var/lib/docker/containers/22d05dcdb0520ac10bad7ddf4ce6eaa0d76070952189fb5e99dc8390be8da836/22d05dcdb0520ac10bad7ddf4ce6eaa0d76070952189fb5e99dc8390be8da836-json.log",
	        "Name": "/functional-298630",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-298630:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-298630",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "22d05dcdb0520ac10bad7ddf4ce6eaa0d76070952189fb5e99dc8390be8da836",
	                "LowerDir": "/var/lib/docker/overlay2/5a2e6d37d93a514ec2cd6eeac904b841b9f83988410052748e4c0c05d4edc538-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a2e6d37d93a514ec2cd6eeac904b841b9f83988410052748e4c0c05d4edc538/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a2e6d37d93a514ec2cd6eeac904b841b9f83988410052748e4c0c05d4edc538/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a2e6d37d93a514ec2cd6eeac904b841b9f83988410052748e4c0c05d4edc538/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-298630",
	                "Source": "/var/lib/docker/volumes/functional-298630/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-298630",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-298630",
	                "name.minikube.sigs.k8s.io": "functional-298630",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e25f7210b6fd556e76b7de35a110b56b34afcc214f5f6acf1e77e52d71a84596",
	            "SandboxKey": "/var/run/docker/netns/e25f7210b6fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-298630": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ce3c0471d70455c4534ef42952fb8aebba92a1da6e498f85c11c078c103d698",
	                    "EndpointID": "3f8755694ef40acd064ac8366107010d812334db3148efad21278732db3628f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "66:f1:62:ec:e5:c9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-298630",
	                        "22d05dcdb052"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-298630 -n functional-298630
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-298630 logs -n 25: (1.313832555s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-298630 /tmp/TestFunctionalparallelMountCmdspecific-port3446947167/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │                     │
	│ ssh            │ functional-298630 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ ssh            │ functional-298630 ssh -- ls -la /mount-9p                                                                                         │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ ssh            │ functional-298630 ssh sudo umount -f /mount-9p                                                                                    │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │                     │
	│ mount          │ -p functional-298630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1948369065/001:/mount1 --alsologtostderr -v=1                │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │                     │
	│ ssh            │ functional-298630 ssh findmnt -T /mount1                                                                                          │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │                     │
	│ mount          │ -p functional-298630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1948369065/001:/mount2 --alsologtostderr -v=1                │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │                     │
	│ mount          │ -p functional-298630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1948369065/001:/mount3 --alsologtostderr -v=1                │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │                     │
	│ ssh            │ functional-298630 ssh findmnt -T /mount1                                                                                          │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ ssh            │ functional-298630 ssh findmnt -T /mount2                                                                                          │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ ssh            │ functional-298630 ssh findmnt -T /mount3                                                                                          │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ mount          │ -p functional-298630 --kill=true                                                                                                  │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-298630 --alsologtostderr -v=1                                                                    │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ image          │ functional-298630 image ls --format short --alsologtostderr                                                                       │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ image          │ functional-298630 image ls --format yaml --alsologtostderr                                                                        │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ image          │ functional-298630 image ls --format json --alsologtostderr                                                                        │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ image          │ functional-298630 image ls --format table --alsologtostderr                                                                       │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ ssh            │ functional-298630 ssh pgrep buildkitd                                                                                             │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │                     │
	│ image          │ functional-298630 image build -t localhost/my-image:functional-298630 testdata/build --alsologtostderr                            │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ update-context │ functional-298630 update-context --alsologtostderr -v=2                                                                           │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ update-context │ functional-298630 update-context --alsologtostderr -v=2                                                                           │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ update-context │ functional-298630 update-context --alsologtostderr -v=2                                                                           │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ image          │ functional-298630 image ls                                                                                                        │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:23 UTC │ 02 Dec 25 15:23 UTC │
	│ service        │ functional-298630 service list                                                                                                    │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:33 UTC │ 02 Dec 25 15:33 UTC │
	│ service        │ functional-298630 service list -o json                                                                                            │ functional-298630 │ jenkins │ v1.37.0 │ 02 Dec 25 15:33 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:23:37
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:23:37.529700  305615 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:23:37.529792  305615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:23:37.529797  305615 out.go:374] Setting ErrFile to fd 2...
	I1202 15:23:37.529801  305615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:23:37.530135  305615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:23:37.530643  305615 out.go:368] Setting JSON to false
	I1202 15:23:37.531728  305615 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7559,"bootTime":1764681459,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:23:37.531797  305615 start.go:143] virtualization: kvm guest
	I1202 15:23:37.533867  305615 out.go:179] * [functional-298630] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:23:37.534976  305615 notify.go:221] Checking for updates...
	I1202 15:23:37.535009  305615 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:23:37.536093  305615 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:23:37.537172  305615 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:23:37.538312  305615 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 15:23:37.539556  305615 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:23:37.543961  305615 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:23:37.545459  305615 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:23:37.546009  305615 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:23:37.570480  305615 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:23:37.570660  305615 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:23:37.637977  305615 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-02 15:23:37.626838701 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:23:37.638079  305615 docker.go:319] overlay module found
	I1202 15:23:37.639932  305615 out.go:179] * Using the docker driver based on the existing profile
	I1202 15:23:37.641110  305615 start.go:309] selected driver: docker
	I1202 15:23:37.641139  305615 start.go:927] validating driver "docker" against &{Name:functional-298630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-298630 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:23:37.641219  305615 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:23:37.643151  305615 out.go:203] 
	W1202 15:23:37.644488  305615 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I1202 15:23:37.645719  305615 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 15:23:52 functional-298630 crio[3578]: time="2025-12-02T15:23:52.452026368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:23:52 functional-298630 crio[3578]: time="2025-12-02T15:23:52.452319102Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f7767e68e3d7de8df3d20c9c3397af436a746fb4e26fb585c81fe2bb8ea49902/merged/etc/group: no such file or directory"
	Dec 02 15:23:52 functional-298630 crio[3578]: time="2025-12-02T15:23:52.452816632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:23:52 functional-298630 crio[3578]: time="2025-12-02T15:23:52.519388352Z" level=info msg="Created container 703fda342dc3fcc3113f11acb21fe5e0b7383392fe2b6e3d8247a8f80c231c8e: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pj2bq/kubernetes-dashboard" id=1ff53ac1-dbf0-4304-8107-0448198d46e1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 15:23:52 functional-298630 crio[3578]: time="2025-12-02T15:23:52.520142393Z" level=info msg="Starting container: 703fda342dc3fcc3113f11acb21fe5e0b7383392fe2b6e3d8247a8f80c231c8e" id=ae40740a-c8db-4760-9265-ff8f474cc7ab name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 15:23:52 functional-298630 crio[3578]: time="2025-12-02T15:23:52.522370559Z" level=info msg="Started container" PID=7417 containerID=703fda342dc3fcc3113f11acb21fe5e0b7383392fe2b6e3d8247a8f80c231c8e description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pj2bq/kubernetes-dashboard id=ae40740a-c8db-4760-9265-ff8f474cc7ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa68d8daafb14b708424287410c7281bdd64cd7254baeeb585999e7d0bd67d58
	Dec 02 15:23:54 functional-298630 crio[3578]: time="2025-12-02T15:23:54.098529199Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a" id=c9e92edb-a73b-434e-a98b-a34942737061 name=/runtime.v1.ImageService/PullImage
	Dec 02 15:23:54 functional-298630 crio[3578]: time="2025-12-02T15:23:54.099234408Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=05a10c58-6505-4e40-a820-d707d541ec05 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:23:54 functional-298630 crio[3578]: time="2025-12-02T15:23:54.100892214Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=d5785310-c88f-45d1-9d21-abd0c5aebd4b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:23:54 functional-298630 crio[3578]: time="2025-12-02T15:23:54.104877766Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-btxcb/dashboard-metrics-scraper" id=1c8e773b-73ff-4d0b-aae1-22059f559e02 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 15:23:54 functional-298630 crio[3578]: time="2025-12-02T15:23:54.105014257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:23:54 functional-298630 crio[3578]: time="2025-12-02T15:23:54.109263408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:23:54 functional-298630 crio[3578]: time="2025-12-02T15:23:54.109490928Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1d75c224842de305ea88bff38dc8d630fd64623ff580e09096d7e875db3a7da1/merged/etc/group: no such file or directory"
	Dec 02 15:23:54 functional-298630 crio[3578]: time="2025-12-02T15:23:54.109902059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:23:54 functional-298630 crio[3578]: time="2025-12-02T15:23:54.140856263Z" level=info msg="Created container 69e4e2f6f443bba1ec2e70501d302bc8b0064a96d8f9e26f92925d8e3d0a1290: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-btxcb/dashboard-metrics-scraper" id=1c8e773b-73ff-4d0b-aae1-22059f559e02 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 15:23:54 functional-298630 crio[3578]: time="2025-12-02T15:23:54.141582219Z" level=info msg="Starting container: 69e4e2f6f443bba1ec2e70501d302bc8b0064a96d8f9e26f92925d8e3d0a1290" id=c663ebd5-91d0-44cf-904f-0a57d169250a name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 15:23:54 functional-298630 crio[3578]: time="2025-12-02T15:23:54.143381838Z" level=info msg="Started container" PID=7527 containerID=69e4e2f6f443bba1ec2e70501d302bc8b0064a96d8f9e26f92925d8e3d0a1290 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-btxcb/dashboard-metrics-scraper id=c663ebd5-91d0-44cf-904f-0a57d169250a name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b1d64fb809f52813b0d3f6f7eed312817de14686aa72aabe0443631a2b62d48
	Dec 02 15:23:56 functional-298630 crio[3578]: time="2025-12-02T15:23:56.655843361Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f2333fd6-065b-4485-8778-c4e7ab4b5786 name=/runtime.v1.ImageService/PullImage
	Dec 02 15:23:57 functional-298630 crio[3578]: time="2025-12-02T15:23:57.65707016Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=689b5858-b0a7-4eac-b184-e4e6f3d9e4bb name=/runtime.v1.ImageService/PullImage
	Dec 02 15:24:37 functional-298630 crio[3578]: time="2025-12-02T15:24:37.656814825Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5ce4d6cb-1082-4b0e-984f-1fa12afe3e4d name=/runtime.v1.ImageService/PullImage
	Dec 02 15:24:39 functional-298630 crio[3578]: time="2025-12-02T15:24:39.656016857Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7cc9b0ff-ce5d-4aa9-96eb-c8f332525bcd name=/runtime.v1.ImageService/PullImage
	Dec 02 15:26:01 functional-298630 crio[3578]: time="2025-12-02T15:26:01.656312246Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1140418d-6746-425b-83c5-ded0215a0101 name=/runtime.v1.ImageService/PullImage
	Dec 02 15:26:08 functional-298630 crio[3578]: time="2025-12-02T15:26:08.65636598Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=edf71af9-0f9e-4eef-b5df-24f560e53102 name=/runtime.v1.ImageService/PullImage
	Dec 02 15:28:45 functional-298630 crio[3578]: time="2025-12-02T15:28:45.655862242Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3ce3b169-2a28-4ec3-875d-b94525c35641 name=/runtime.v1.ImageService/PullImage
	Dec 02 15:28:56 functional-298630 crio[3578]: time="2025-12-02T15:28:56.656834567Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=077102d4-aa09-440f-ac26-29186336cc2e name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	69e4e2f6f443b       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   9b1d64fb809f5       dashboard-metrics-scraper-77bf4d6c4c-btxcb   kubernetes-dashboard
	703fda342dc3f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   fa68d8daafb14       kubernetes-dashboard-855c9754f9-pj2bq        kubernetes-dashboard
	87bf0f38c91c7       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   06795ea88ad1a       mysql-5bb876957f-mz28z                       default
	661b24e9a47c8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   f1ca360699248       busybox-mount                                default
	8bddb8a92a7cb       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   bf02e346b42b2       sp-pod                                       default
	15dc768e063d2       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  9 minutes ago       Running             nginx                       0                   48dc1356f56df       nginx-svc                                    default
	01d349b0b22ca       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                 10 minutes ago      Running             kube-apiserver              0                   ed8ae0f439595       kube-apiserver-functional-298630             kube-system
	6c0aa49f1fc64       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                 10 minutes ago      Running             kube-controller-manager     1                   5ff95f43c3fa7       kube-controller-manager-functional-298630    kube-system
	f6b52ba279bdd       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                 10 minutes ago      Running             kube-scheduler              1                   56275367f477e       kube-scheduler-functional-298630             kube-system
	9f65f9007e50b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 10 minutes ago      Running             etcd                        1                   c046756bc1fc7       etcd-functional-298630                       kube-system
	c80f1a843dd0a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                 10 minutes ago      Running             kube-proxy                  1                   c2fe0115d785c       kube-proxy-9zpp5                             kube-system
	7c60aaa73d0be       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   6d4489115b2b5       kindnet-vlh6m                                kube-system
	5f116779940b5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   6a7b103804ec6       coredns-66bc5c9577-7f6xn                     kube-system
	57359aed0f7e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   c97bb7fcbd460       storage-provisioner                          kube-system
	0e70339246be6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   6a7b103804ec6       coredns-66bc5c9577-7f6xn                     kube-system
	65ce775f6a280       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   c97bb7fcbd460       storage-provisioner                          kube-system
	22a6ba34051ce       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   6d4489115b2b5       kindnet-vlh6m                                kube-system
	63d689f00257e       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                 11 minutes ago      Exited              kube-proxy                  0                   c2fe0115d785c       kube-proxy-9zpp5                             kube-system
	527c6d7f6a7e0       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 11 minutes ago      Exited              etcd                        0                   c046756bc1fc7       etcd-functional-298630                       kube-system
	bef09c2ffb383       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                 11 minutes ago      Exited              kube-scheduler              0                   56275367f477e       kube-scheduler-functional-298630             kube-system
	c05cde3d083da       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                 11 minutes ago      Exited              kube-controller-manager     0                   5ff95f43c3fa7       kube-controller-manager-functional-298630    kube-system
	
	
	==> coredns [0e70339246be642c874e8c46f2bb55ef5951d34e8e59513c65cd75e7d3cdbe8c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52504 - 47052 "HINFO IN 4693538114805416320.8470973166645376700. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.071996687s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5f116779940b52b8543fd55525eb935d168ed19d48cc13bf18233013c5e9809a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53409 - 46309 "HINFO IN 8224038352061477013.5526210214916988856. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.043709907s
	
	
	==> describe nodes <==
	Name:               functional-298630
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-298630
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=functional-298630
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_21_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:21:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-298630
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:33:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:33:12 +0000   Tue, 02 Dec 2025 15:21:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:33:12 +0000   Tue, 02 Dec 2025 15:21:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:33:12 +0000   Tue, 02 Dec 2025 15:21:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:33:12 +0000   Tue, 02 Dec 2025 15:22:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-298630
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                a7aa59e2-3e9a-41be-b130-01f48482dd52
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7fh87                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-7p7xj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-mz28z                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m38s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 coredns-66bc5c9577-7f6xn                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-298630                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-vlh6m                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-298630              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-298630     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9zpp5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-298630              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-btxcb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pj2bq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-298630 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-298630 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-298630 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node functional-298630 event: Registered Node functional-298630 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-298630 status is now: NodeReady
	  Normal  NodeNotReady             10m                kubelet          Node functional-298630 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-298630 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-298630 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-298630 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-298630 event: Registered Node functional-298630 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 9b c8 59 55 e7 08 06
	[  +4.389247] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 07 ad 09 99 ea 08 06
	[Dec 2 15:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.025203] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023929] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 15:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023866] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023913] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +2.047808] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +4.031697] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +8.511329] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[ +16.382712] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 15:19] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	
	
	==> etcd [527c6d7f6a7e0195c9c867087dad059e802a20da9247e306360fb2f505ad77c7] <==
	{"level":"warn","ts":"2025-12-02T15:21:39.398181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:39.404534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:39.411385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:39.429981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:39.436323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:39.442406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:39.490228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39580","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T15:22:28.655713Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T15:22:28.655801Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-298630","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T15:22:28.655920Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:22:35.657701Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:22:35.657816Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:22:35.657843Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-02T15:22:35.657879Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-02T15:22:35.657975Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:22:35.657976Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:22:35.658023Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:22:35.658032Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T15:22:35.658079Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:22:35.658090Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:22:35.658098Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:22:35.660180Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T15:22:35.660236Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:22:35.660261Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T15:22:35.660269Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-298630","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [9f65f9007e50b5d75fffdf256f2a2924356ef8cdf62ba7510ec132902afa5794] <==
	{"level":"warn","ts":"2025-12-02T15:22:49.121052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.131127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.139208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.145741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.152902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.160415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.167517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.173870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.180240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.187556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.194030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.199719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.214649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.220942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.228138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:22:49.271402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42192","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T15:23:52.445374Z","caller":"traceutil/trace.go:172","msg":"trace[110473820] linearizableReadLoop","detail":"{readStateIndex:897; appliedIndex:897; }","duration":"112.393229ms","start":"2025-12-02T15:23:52.332956Z","end":"2025-12-02T15:23:52.445349Z","steps":["trace[110473820] 'read index received'  (duration: 112.385096ms)","trace[110473820] 'applied index is now lower than readState.Index'  (duration: 7.121µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T15:23:52.445567Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.404951ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:23:52.445601Z","caller":"traceutil/trace.go:172","msg":"trace[719915087] transaction","detail":"{read_only:false; response_revision:833; number_of_response:1; }","duration":"178.611792ms","start":"2025-12-02T15:23:52.266977Z","end":"2025-12-02T15:23:52.445589Z","steps":["trace[719915087] 'process raft request'  (duration: 178.416084ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T15:23:52.445595Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.623886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:23:52.445679Z","caller":"traceutil/trace.go:172","msg":"trace[899770012] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:833; }","duration":"112.72094ms","start":"2025-12-02T15:23:52.332946Z","end":"2025-12-02T15:23:52.445667Z","steps":["trace[899770012] 'agreement among raft nodes before linearized reading'  (duration: 112.52251ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T15:23:52.445610Z","caller":"traceutil/trace.go:172","msg":"trace[825304971] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:833; }","duration":"111.473025ms","start":"2025-12-02T15:23:52.334130Z","end":"2025-12-02T15:23:52.445603Z","steps":["trace[825304971] 'agreement among raft nodes before linearized reading'  (duration: 111.386651ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T15:32:48.769768Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1133}
	{"level":"info","ts":"2025-12-02T15:32:48.789091Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1133,"took":"19.001778ms","hash":2016608098,"current-db-size-bytes":3485696,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1568768,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-12-02T15:32:48.789137Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2016608098,"revision":1133,"compact-revision":-1}
	
	
	==> kernel <==
	 15:33:16 up  2:15,  0 user,  load average: 0.11, 0.24, 0.71
	Linux functional-298630 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [22a6ba34051cec162b6b32acc5a9697fd52b2366f634fb37f992affb7820ba41] <==
	I1202 15:21:48.492325       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 15:21:48.492656       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 15:21:48.492782       1 main.go:148] setting mtu 1500 for CNI 
	I1202 15:21:48.492796       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 15:21:48.492816       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T15:21:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 15:21:48.693492       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 15:21:48.693544       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 15:21:48.693558       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 15:21:48.787126       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 15:21:49.115715       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 15:21:49.115746       1 metrics.go:72] Registering metrics
	I1202 15:21:49.115797       1 controller.go:711] "Syncing nftables rules"
	I1202 15:21:58.695065       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:21:58.695155       1 main.go:301] handling current node
	I1202 15:22:08.700555       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:22:08.700598       1 main.go:301] handling current node
	I1202 15:22:18.697548       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:22:18.697580       1 main.go:301] handling current node
	
	
	==> kindnet [7c60aaa73d0be69004e7c15b5e96822304d6ab96e54414ebb05704fb1db72c5a] <==
	I1202 15:31:09.971128       1 main.go:301] handling current node
	I1202 15:31:19.971786       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:31:19.971819       1 main.go:301] handling current node
	I1202 15:31:29.968542       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:31:29.968567       1 main.go:301] handling current node
	I1202 15:31:39.970453       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:31:39.970500       1 main.go:301] handling current node
	I1202 15:31:49.973362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:31:49.973404       1 main.go:301] handling current node
	I1202 15:31:59.968387       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:31:59.968460       1 main.go:301] handling current node
	I1202 15:32:09.969026       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:32:09.969067       1 main.go:301] handling current node
	I1202 15:32:19.972505       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:32:19.972543       1 main.go:301] handling current node
	I1202 15:32:29.973480       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:32:29.973517       1 main.go:301] handling current node
	I1202 15:32:39.977446       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:32:39.977483       1 main.go:301] handling current node
	I1202 15:32:49.968327       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:32:49.968371       1 main.go:301] handling current node
	I1202 15:32:59.968328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:32:59.968364       1 main.go:301] handling current node
	I1202 15:33:09.969507       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:33:09.969559       1 main.go:301] handling current node
	
	
	==> kube-apiserver [01d349b0b22cac8132632ce716807f09e161c2d19b350320fe2b2be991cea294] <==
	I1202 15:22:49.754800       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 15:22:50.627147       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 15:22:50.754274       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1202 15:22:50.934301       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1202 15:22:50.935674       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 15:22:50.941386       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 15:22:51.503925       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 15:22:51.596836       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 15:22:51.646274       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 15:22:51.651498       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 15:22:53.467716       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 15:23:08.805462       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.186.71"}
	I1202 15:23:12.963203       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.74.225"}
	I1202 15:23:14.001359       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.78.94"}
	I1202 15:23:14.909089       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.52.121"}
	E1202 15:23:28.906522       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:33506: use of closed network connection
	E1202 15:23:37.035186       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:34372: use of closed network connection
	I1202 15:23:38.051201       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.43.159"}
	I1202 15:23:47.489238       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 15:23:47.598503       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.183.94"}
	I1202 15:23:47.612620       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.1.209"}
	E1202 15:23:52.191706       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51420: use of closed network connection
	E1202 15:23:53.689667       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38390: use of closed network connection
	E1202 15:23:55.063919       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38408: use of closed network connection
	I1202 15:32:49.662306       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6c0aa49f1fc6427679e30c6546cfb49ae687aa6ad60d042b1e057f4623c079ec] <==
	I1202 15:22:53.060931       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 15:22:53.062154       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 15:22:53.062160       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 15:22:53.062190       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 15:22:53.062228       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 15:22:53.062235       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 15:22:53.062234       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 15:22:53.062268       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 15:22:53.063566       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 15:22:53.063613       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 15:22:53.064773       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 15:22:53.067095       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 15:22:53.069496       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:22:53.069512       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 15:22:53.069529       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 15:22:53.070686       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 15:22:53.072559       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 15:22:53.081004       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1202 15:23:47.540967       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:23:47.547337       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:23:47.549373       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:23:47.551945       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:23:47.553051       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:23:47.558686       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:23:47.560747       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c05cde3d083dab26fafb88c64f3f7852a290fc32903b73b63b4b353f33bd6b5a] <==
	I1202 15:21:46.867769       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 15:21:46.868843       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 15:21:46.868871       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 15:21:46.868924       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1202 15:21:46.868954       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 15:21:46.868927       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 15:21:46.869012       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 15:21:46.869023       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 15:21:46.869123       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 15:21:46.869269       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 15:21:46.869283       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 15:21:46.869438       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 15:21:46.870213       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 15:21:46.871358       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 15:21:46.873179       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1202 15:21:46.873244       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1202 15:21:46.873322       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:21:46.873329       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1202 15:21:46.873336       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1202 15:21:46.873340       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1202 15:21:46.879665       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-298630" podCIDRs=["10.244.0.0/24"]
	I1202 15:21:46.887826       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 15:22:01.819513       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1202 15:22:26.822624       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1202 15:22:41.824554       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [63d689f00257e6d927dcd2c9cc2043d79acb8ef76b690a6fbb8de0c2764a8633] <==
	I1202 15:21:48.368730       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:21:48.430745       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 15:21:48.530875       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 15:21:48.530917       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:21:48.531013       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:21:48.552068       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:21:48.552141       1 server_linux.go:132] "Using iptables Proxier"
	I1202 15:21:48.557204       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:21:48.557531       1 server.go:527] "Version info" version="v1.34.2"
	I1202 15:21:48.557561       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:21:48.558709       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:21:48.558736       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:21:48.558779       1 config.go:200] "Starting service config controller"
	I1202 15:21:48.558794       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:21:48.558857       1 config.go:309] "Starting node config controller"
	I1202 15:21:48.558871       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:21:48.558924       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:21:48.558952       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:21:48.658980       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:21:48.659003       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:21:48.659041       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:21:48.659144       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c80f1a843dd0a6fce08e0ae67c1dd080e3bb44a7fcbf23ca958ded15cbaf8cb4] <==
	I1202 15:22:29.610637       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:22:29.675284       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 15:22:29.775463       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 15:22:29.775498       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:22:29.775592       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:22:29.795050       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:22:29.795113       1 server_linux.go:132] "Using iptables Proxier"
	I1202 15:22:29.800825       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:22:29.801149       1 server.go:527] "Version info" version="v1.34.2"
	I1202 15:22:29.801170       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:22:29.802449       1 config.go:200] "Starting service config controller"
	I1202 15:22:29.802474       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:22:29.802475       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:22:29.802489       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:22:29.802459       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:22:29.802537       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:22:29.802552       1 config.go:309] "Starting node config controller"
	I1202 15:22:29.802561       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:22:29.802569       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:22:29.902704       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:22:29.902843       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 15:22:29.902848       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bef09c2ffb383e84d4f91ca1128efd8e09813cda8fcdfa41c0ab5a1cc5c75a75] <==
	E1202 15:21:39.892107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 15:21:39.892112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 15:21:39.892171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 15:21:40.703199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:21:40.766805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 15:21:40.791559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 15:21:40.800898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 15:21:40.815260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:21:40.816206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 15:21:40.839924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 15:21:40.871105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:21:40.898952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 15:21:40.962080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 15:21:40.983138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 15:21:41.044626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 15:21:41.071878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 15:21:41.072578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 15:21:41.135323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1202 15:21:43.088074       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:22:46.278197       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:22:46.278225       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 15:22:46.278263       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 15:22:46.278296       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 15:22:46.278323       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 15:22:46.278352       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f6b52ba279bdd210099e475411fe5b637fd36a0cd911882a158169400f9e87ef] <==
	I1202 15:22:48.400510       1 serving.go:386] Generated self-signed cert in-memory
	W1202 15:22:49.650800       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 15:22:49.650839       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 15:22:49.650852       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 15:22:49.650867       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 15:22:49.670583       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 15:22:49.670616       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:22:49.673608       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:22:49.673717       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:22:49.674295       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 15:22:49.674479       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 15:22:49.774191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 15:30:36 functional-298630 kubelet[4277]: E1202 15:30:36.655232    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
	Dec 02 15:30:39 functional-298630 kubelet[4277]: E1202 15:30:39.656264    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7p7xj" podUID="70101e79-be4a-41e4-8ef8-bef1edd42621"
	Dec 02 15:30:51 functional-298630 kubelet[4277]: E1202 15:30:51.655692    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7p7xj" podUID="70101e79-be4a-41e4-8ef8-bef1edd42621"
	Dec 02 15:30:51 functional-298630 kubelet[4277]: E1202 15:30:51.655818    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
	Dec 02 15:31:04 functional-298630 kubelet[4277]: E1202 15:31:04.655249    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7p7xj" podUID="70101e79-be4a-41e4-8ef8-bef1edd42621"
	Dec 02 15:31:06 functional-298630 kubelet[4277]: E1202 15:31:06.655324    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
	Dec 02 15:31:16 functional-298630 kubelet[4277]: E1202 15:31:16.655680    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7p7xj" podUID="70101e79-be4a-41e4-8ef8-bef1edd42621"
	Dec 02 15:31:18 functional-298630 kubelet[4277]: E1202 15:31:18.655538    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
	Dec 02 15:31:31 functional-298630 kubelet[4277]: E1202 15:31:31.655576    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7p7xj" podUID="70101e79-be4a-41e4-8ef8-bef1edd42621"
	Dec 02 15:31:32 functional-298630 kubelet[4277]: E1202 15:31:32.655290    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
	Dec 02 15:31:42 functional-298630 kubelet[4277]: E1202 15:31:42.655133    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7p7xj" podUID="70101e79-be4a-41e4-8ef8-bef1edd42621"
	Dec 02 15:31:44 functional-298630 kubelet[4277]: E1202 15:31:44.655187    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
	Dec 02 15:31:57 functional-298630 kubelet[4277]: E1202 15:31:57.656444    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7p7xj" podUID="70101e79-be4a-41e4-8ef8-bef1edd42621"
	Dec 02 15:31:59 functional-298630 kubelet[4277]: E1202 15:31:59.655943    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
	Dec 02 15:32:08 functional-298630 kubelet[4277]: E1202 15:32:08.655123    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7p7xj" podUID="70101e79-be4a-41e4-8ef8-bef1edd42621"
	Dec 02 15:32:11 functional-298630 kubelet[4277]: E1202 15:32:11.655963    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
	Dec 02 15:32:21 functional-298630 kubelet[4277]: E1202 15:32:21.655987    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7p7xj" podUID="70101e79-be4a-41e4-8ef8-bef1edd42621"
	Dec 02 15:32:25 functional-298630 kubelet[4277]: E1202 15:32:25.655530    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
	Dec 02 15:32:36 functional-298630 kubelet[4277]: E1202 15:32:36.655564    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7p7xj" podUID="70101e79-be4a-41e4-8ef8-bef1edd42621"
	Dec 02 15:32:37 functional-298630 kubelet[4277]: E1202 15:32:37.656672    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
	Dec 02 15:32:48 functional-298630 kubelet[4277]: E1202 15:32:48.655182    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
	Dec 02 15:32:51 functional-298630 kubelet[4277]: E1202 15:32:51.656278    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7p7xj" podUID="70101e79-be4a-41e4-8ef8-bef1edd42621"
	Dec 02 15:33:00 functional-298630 kubelet[4277]: E1202 15:33:00.655854    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
	Dec 02 15:33:04 functional-298630 kubelet[4277]: E1202 15:33:04.655846    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7p7xj" podUID="70101e79-be4a-41e4-8ef8-bef1edd42621"
	Dec 02 15:33:14 functional-298630 kubelet[4277]: E1202 15:33:14.656002    4277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7fh87" podUID="65dd5c24-28ed-4bf6-93d1-60379864a35c"
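	The repeated ImagePullBackOff entries above line up with the hello-node ServiceCmd and ServiceCmdConnect failures in this report: CRI-O resolves short image names in enforcing mode, and the unqualified reference kicbase/echo-server:latest matches more than one candidate registry, so the pull is rejected as ambiguous rather than attempted. Two hedged remediations, assuming docker.io is the intended source for this image and that CRI-O reads drop-in files from /etc/containers/registries.conf.d as documented for containers-registries.conf (the drop-in filename below is illustrative):

	  # Option 1: make the workload spec fully qualified
	  #   image: docker.io/kicbase/echo-server:latest
	  # Option 2: add a short-name alias on the node, then restart CRI-O
	  sudo tee /etc/containers/registries.conf.d/99-echo-server.conf <<'EOF'
	  [aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"
	  EOF
	  sudo systemctl restart crio

	Either change removes the ambiguity without relaxing short-name-mode = "enforcing".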
	
	
	==> kubernetes-dashboard [703fda342dc3fcc3113f11acb21fe5e0b7383392fe2b6e3d8247a8f80c231c8e] <==
	2025/12/02 15:23:52 Starting overwatch
	2025/12/02 15:23:52 Using namespace: kubernetes-dashboard
	2025/12/02 15:23:52 Using in-cluster config to connect to apiserver
	2025/12/02 15:23:52 Using secret token for csrf signing
	2025/12/02 15:23:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 15:23:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 15:23:52 Successful initial request to the apiserver, version: v1.34.2
	2025/12/02 15:23:52 Generating JWE encryption key
	2025/12/02 15:23:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 15:23:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 15:23:52 Initializing JWE encryption key from synchronized object
	2025/12/02 15:23:52 Creating in-cluster Sidecar client
	2025/12/02 15:23:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 15:23:52 Serving insecurely on HTTP port: 9090
	2025/12/02 15:24:22 Successful request to sidecar
	
	
	==> storage-provisioner [57359aed0f7e9143c690a0fb035ee1d40dfc598de1bd586534fa41277e6a8ffd] <==
	W1202 15:32:51.654935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:32:53.658637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:32:53.662585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:32:55.665711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:32:55.669807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:32:57.673265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:32:57.678132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:32:59.681352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:32:59.685877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:01.689131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:01.693086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:03.696502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:03.700214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:05.703188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:05.707123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:07.710193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:07.719300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:09.722524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:09.726278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:11.729483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:11.733590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:13.736540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:13.741792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:15.745613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:33:15.750312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [65ce775f6a280c142f44fedaf1f799e414004fb833ec6dc393f2641eeee6597f] <==
	W1202 15:22:03.522766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:05.525781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:05.530137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:07.533271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:07.538271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:09.541777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:09.545611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:11.548854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:11.554549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:13.557481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:13.562817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:15.565513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:15.569182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:17.572227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:17.577203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:19.580374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:19.588670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:21.593545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:21.598139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:23.600647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:23.604975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:25.608809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:25.612867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:27.616116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:27.620948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-298630 -n functional-298630
helpers_test.go:269: (dbg) Run:  kubectl --context functional-298630 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-7fh87 hello-node-connect-7d85dfc575-7p7xj
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-298630 describe pod busybox-mount hello-node-75c85bcc94-7fh87 hello-node-connect-7d85dfc575-7p7xj
helpers_test.go:290: (dbg) kubectl --context functional-298630 describe pod busybox-mount hello-node-75c85bcc94-7fh87 hello-node-connect-7d85dfc575-7p7xj:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-298630/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:23:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://661b24e9a47c8aab12ee117280abea64f501bff70c6550f992645b2ec90f5190
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 15:23:39 +0000
	      Finished:     Tue, 02 Dec 2025 15:23:39 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wbfv6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wbfv6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m40s  default-scheduler  Successfully assigned default/busybox-mount to functional-298630
	  Normal  Pulling    9m40s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m38s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.909s (1.909s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m38s  kubelet            Created container: mount-munger
	  Normal  Started    9m38s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7fh87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-298630/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:23:12 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6z88 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-q6z88:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7fh87 to functional-298630
	  Normal   Pulling    7m16s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m16s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m16s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x41 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3s (x41 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-7p7xj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-298630/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:23:14 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lw5tv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lw5tv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7p7xj to functional-298630
	  Normal   Pulling    7m9s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m9s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m9s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    0s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     0s (x42 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.06s)
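Every hello-node pull in the logs above fails the same way: CRI-O's short-name resolution is in enforcing mode, and the unqualified reference kicbase/echo-server matches more than one candidate registry, so the pull is rejected before it starts. As a sketch only (not part of this run, and assuming CRI-O on the node reads drop-in files from /etc/containers/registries.conf.d/), a short-name alias would make the reference unambiguous:

	# /etc/containers/registries.conf.d/99-echo-server.conf  (illustrative only)
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"

Alternatively, setting short-name-mode = "permissive" in /etc/containers/registries.conf would stop the hard failure, at the cost of ambiguous short-name behavior.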

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-298630 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-298630 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-7fh87" [65dd5c24-28ed-4bf6-93d1-60379864a35c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-298630 -n functional-298630
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-02 15:33:13.315481826 +0000 UTC m=+1080.182010534
functional_test.go:1460: (dbg) Run:  kubectl --context functional-298630 describe po hello-node-75c85bcc94-7fh87 -n default
functional_test.go:1460: (dbg) kubectl --context functional-298630 describe po hello-node-75c85bcc94-7fh87 -n default:
Name:             hello-node-75c85bcc94-7fh87
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-298630/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:23:12 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6z88 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-q6z88:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7fh87 to functional-298630
  Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m53s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m39s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-298630 logs hello-node-75c85bcc94-7fh87 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-298630 logs hello-node-75c85bcc94-7fh87 -n default: exit status 1 (71.712181ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-7fh87" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-298630 logs hello-node-75c85bcc94-7fh87 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.68s)
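The deployment above is created with the unqualified image reference kicbase/echo-server (functional_test.go:1451), so it hits the same short-name enforcement as ServiceCmdConnect. As a sketch, pinning a fully qualified name bypasses short-name resolution entirely (assuming docker.io is the intended registry):

	kubectl --context functional-298630 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest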

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image load --daemon kicbase/echo-server:functional-298630 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-298630" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image load --daemon kicbase/echo-server:functional-298630 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-298630" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-298630
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image load --daemon kicbase/echo-server:functional-298630 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-298630" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.71s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image save kicbase/echo-server:functional-298630 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1202 15:23:30.238368  302951 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:23:30.238677  302951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:23:30.238689  302951 out.go:374] Setting ErrFile to fd 2...
	I1202 15:23:30.238693  302951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:23:30.238897  302951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:23:30.239486  302951 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:23:30.239608  302951 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:23:30.240088  302951 cli_runner.go:164] Run: docker container inspect functional-298630 --format={{.State.Status}}
	I1202 15:23:30.258852  302951 ssh_runner.go:195] Run: systemctl --version
	I1202 15:23:30.258923  302951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-298630
	I1202 15:23:30.278848  302951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/functional-298630/id_rsa Username:docker}
	I1202 15:23:30.378243  302951 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1202 15:23:30.378328  302951 cache_images.go:255] Failed to load cached images for "functional-298630": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1202 15:23:30.378351  302951 cache_images.go:267] failed pushing to: functional-298630

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
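The stat failure in the stderr above is a cascade rather than an independent bug: ImageSaveToFile never produced echo-server-save.tar, so this load has no file to read. The intended round trip, as a sketch (assuming the image is actually present in the cluster runtime so the save step succeeds, and using a hypothetical /tmp path):

	out/minikube-linux-amd64 -p functional-298630 image save kicbase/echo-server:functional-298630 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-298630 image load /tmp/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-298630 image ls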

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-298630
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image save --daemon kicbase/echo-server:functional-298630 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-298630
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-298630: exit status 1 (18.290126ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-298630

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-298630

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 service --namespace=default --https --url hello-node: exit status 115 (561.901042ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31819
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-298630 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 service hello-node --url --format={{.IP}}: exit status 115 (581.404095ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-298630 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 service hello-node --url: exit status 115 (545.063825ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31819
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-298630 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31819
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.55s)
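The HTTPS, Format, and URL subtests all print a plausible NodePort URL but exit with SVC_UNREACHABLE for the same underlying reason: no hello-node pod ever becomes Ready, so the service has no endpoints to route to. A quick way to confirm that, as a sketch (the empty Endpoints field for the equivalent hello-node-connect service appears later in this report):

	kubectl --context functional-298630 get endpoints hello-node
	kubectl --context functional-298630 get pods -l app=hello-node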

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (603.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-310311 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-310311 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-dqf4x" [2e0c9293-647d-4f80-845f-a3f6b64c6522] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-310311 -n functional-310311
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-02 15:45:44.081820842 +0000 UTC m=+1830.948349648
functional_test.go:1645: (dbg) Run:  kubectl --context functional-310311 describe po hello-node-connect-9f67c86d4-dqf4x -n default
functional_test.go:1645: (dbg) kubectl --context functional-310311 describe po hello-node-connect-9f67c86d4-dqf4x -n default:
Name:             hello-node-connect-9f67c86d4-dqf4x
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-310311/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:35:43 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh8p5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-lh8p5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-dqf4x to functional-310311
  Normal   Pulling    6m46s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m46s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m46s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m59s (x19 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m31s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-310311 logs hello-node-connect-9f67c86d4-dqf4x -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-310311 logs hello-node-connect-9f67c86d4-dqf4x -n default: exit status 1 (84.667085ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-dqf4x" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-310311 logs hello-node-connect-9f67c86d4-dqf4x -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-310311 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-dqf4x
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-310311/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:35:43 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh8p5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-lh8p5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-dqf4x to functional-310311
  Normal   Pulling    6m46s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m46s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m46s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m59s (x19 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m31s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-310311 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-310311 logs -l app=hello-node-connect: exit status 1 (94.748525ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-dqf4x" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-310311 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-310311 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.21.216
IPs:                      10.108.21.216
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32210/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-310311
helpers_test.go:243: (dbg) docker inspect functional-310311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e24a2004b0b316df0fbb54b21dbb42e58625526f219c3c95976308d993d72217",
	        "Created": "2025-12-02T15:33:25.860835954Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 314695,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:33:25.89294848Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/e24a2004b0b316df0fbb54b21dbb42e58625526f219c3c95976308d993d72217/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e24a2004b0b316df0fbb54b21dbb42e58625526f219c3c95976308d993d72217/hostname",
	        "HostsPath": "/var/lib/docker/containers/e24a2004b0b316df0fbb54b21dbb42e58625526f219c3c95976308d993d72217/hosts",
	        "LogPath": "/var/lib/docker/containers/e24a2004b0b316df0fbb54b21dbb42e58625526f219c3c95976308d993d72217/e24a2004b0b316df0fbb54b21dbb42e58625526f219c3c95976308d993d72217-json.log",
	        "Name": "/functional-310311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-310311:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-310311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e24a2004b0b316df0fbb54b21dbb42e58625526f219c3c95976308d993d72217",
	                "LowerDir": "/var/lib/docker/overlay2/e34f11919b83cd2b2633d1306212a3fd91b2418a7bf0a4c4dcb27766c0d5b890-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e34f11919b83cd2b2633d1306212a3fd91b2418a7bf0a4c4dcb27766c0d5b890/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e34f11919b83cd2b2633d1306212a3fd91b2418a7bf0a4c4dcb27766c0d5b890/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e34f11919b83cd2b2633d1306212a3fd91b2418a7bf0a4c4dcb27766c0d5b890/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-310311",
	                "Source": "/var/lib/docker/volumes/functional-310311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-310311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-310311",
	                "name.minikube.sigs.k8s.io": "functional-310311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0238a8e9962acd2d08293a44587cecfbdce68a58dd12207b1b322de1bb21ec81",
	            "SandboxKey": "/var/run/docker/netns/0238a8e9962a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-310311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "60dc28cba221f9949e7b0a62cff614e35fc8780c376e345c536dad7845960067",
	                    "EndpointID": "3196f45b803a8b8050a6f3b3ad100bb0bf8db57689503d7a7abdc3c286847c95",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "06:bc:59:38:2b:39",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-310311",
	                        "e24a2004b0b3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-310311 -n functional-310311
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-310311 logs -n 25: (1.313253539s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-310311 ssh -n functional-310311 sudo cat /home/docker/cp-test.txt                           │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ cp             │ functional-310311 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                              │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ ssh            │ functional-310311 ssh -n functional-310311 sudo cat /tmp/does/not/exist/cp-test.txt                    │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ tunnel         │ functional-310311 tunnel --alsologtostderr                                                             │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │                     │
	│ tunnel         │ functional-310311 tunnel --alsologtostderr                                                             │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │                     │
	│ tunnel         │ functional-310311 tunnel --alsologtostderr                                                             │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │                     │
	│ addons         │ functional-310311 addons list                                                                          │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ addons         │ functional-310311 addons list -o json                                                                  │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ ssh            │ functional-310311 ssh sudo cat /etc/ssl/certs/268099.pem                                               │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ ssh            │ functional-310311 ssh sudo cat /usr/share/ca-certificates/268099.pem                                   │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ ssh            │ functional-310311 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ ssh            │ functional-310311 ssh sudo cat /etc/ssl/certs/2680992.pem                                              │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ ssh            │ functional-310311 ssh sudo cat /usr/share/ca-certificates/2680992.pem                                  │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ ssh            │ functional-310311 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ image          │ functional-310311 image ls --format short --alsologtostderr                                            │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ ssh            │ functional-310311 ssh pgrep buildkitd                                                                  │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │                     │
	│ image          │ functional-310311 image build -t localhost/my-image:functional-310311 testdata/build --alsologtostderr │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ image          │ functional-310311 image ls --format yaml --alsologtostderr                                             │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ image          │ functional-310311 image ls --format json --alsologtostderr                                             │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ image          │ functional-310311 image ls --format table --alsologtostderr                                            │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ update-context │ functional-310311 update-context --alsologtostderr -v=2                                                │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ update-context │ functional-310311 update-context --alsologtostderr -v=2                                                │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ update-context │ functional-310311 update-context --alsologtostderr -v=2                                                │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ image          │ functional-310311 image ls                                                                             │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:36 UTC │ 02 Dec 25 15:36 UTC │
	│ service        │ functional-310311 service list                                                                         │ functional-310311 │ jenkins │ v1.37.0 │ 02 Dec 25 15:45 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:35:45
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:35:45.921967  325246 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:35:45.922074  325246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:35:45.922082  325246 out.go:374] Setting ErrFile to fd 2...
	I1202 15:35:45.922086  325246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:35:45.922399  325246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:35:45.922828  325246 out.go:368] Setting JSON to false
	I1202 15:35:45.923755  325246 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8287,"bootTime":1764681459,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:35:45.923828  325246 start.go:143] virtualization: kvm guest
	I1202 15:35:45.925648  325246 out.go:179] * [functional-310311] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:35:45.926834  325246 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:35:45.926857  325246 notify.go:221] Checking for updates...
	I1202 15:35:45.929135  325246 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:35:45.930180  325246 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:35:45.931240  325246 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 15:35:45.932331  325246 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:35:45.933357  325246 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:35:45.935205  325246 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 15:35:45.935972  325246 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:35:45.962175  325246 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:35:45.962296  325246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:35:46.033246  325246 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-02 15:35:46.022321101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:35:46.033347  325246 docker.go:319] overlay module found
	I1202 15:35:46.035171  325246 out.go:179] * Using the docker driver based on existing profile
	I1202 15:35:46.036392  325246 start.go:309] selected driver: docker
	I1202 15:35:46.036414  325246 start.go:927] validating driver "docker" against &{Name:functional-310311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-310311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:35:46.036544  325246 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:35:46.038193  325246 out.go:203] 
	W1202 15:35:46.039333  325246 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I1202 15:35:46.040569  325246 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 15:36:22 functional-310311 crio[4614]: time="2025-12-02T15:36:22.111293811Z" level=info msg="Pulling image: docker.io/nginx:latest" id=bf567d22-6d4b-4c57-8aa8-ce6c717f910d name=/runtime.v1.ImageService/PullImage
	Dec 02 15:36:22 functional-310311 crio[4614]: time="2025-12-02T15:36:22.112784134Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 02 15:36:23 functional-310311 crio[4614]: time="2025-12-02T15:36:23.267251414Z" level=info msg="Pulled image: docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541" id=bf567d22-6d4b-4c57-8aa8-ce6c717f910d name=/runtime.v1.ImageService/PullImage
	Dec 02 15:36:23 functional-310311 crio[4614]: time="2025-12-02T15:36:23.268030476Z" level=info msg="Checking image status: docker.io/nginx:latest" id=31b8fd47-0001-4df8-8d43-667c9b9e21e3 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:36:23 functional-310311 crio[4614]: time="2025-12-02T15:36:23.269385321Z" level=info msg="Checking image status: docker.io/nginx" id=37c39a65-f810-4ff9-8085-7a3c365f874b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 15:36:23 functional-310311 crio[4614]: time="2025-12-02T15:36:23.272126766Z" level=info msg="Creating container: default/sp-pod/myfrontend" id=ace0da90-ed54-43a2-911f-9ffc4b632e5a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 15:36:23 functional-310311 crio[4614]: time="2025-12-02T15:36:23.272251533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:36:23 functional-310311 crio[4614]: time="2025-12-02T15:36:23.276628438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:36:23 functional-310311 crio[4614]: time="2025-12-02T15:36:23.277144786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 15:36:23 functional-310311 crio[4614]: time="2025-12-02T15:36:23.30718113Z" level=info msg="Created container 335394d7d7891b9d40bff57315001be6a2f50094b471af142c6ab99e015d18d2: default/sp-pod/myfrontend" id=ace0da90-ed54-43a2-911f-9ffc4b632e5a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 15:36:23 functional-310311 crio[4614]: time="2025-12-02T15:36:23.307831744Z" level=info msg="Starting container: 335394d7d7891b9d40bff57315001be6a2f50094b471af142c6ab99e015d18d2" id=2de61609-be42-4f3a-81cf-6193b6265072 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 15:36:23 functional-310311 crio[4614]: time="2025-12-02T15:36:23.309559047Z" level=info msg="Started container" PID=8509 containerID=335394d7d7891b9d40bff57315001be6a2f50094b471af142c6ab99e015d18d2 description=default/sp-pod/myfrontend id=2de61609-be42-4f3a-81cf-6193b6265072 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be3030682ce182b3f024c111ece310cc568f0a5cbbb797fa179dc11e7eb3c3ef
	Dec 02 15:36:25 functional-310311 crio[4614]: time="2025-12-02T15:36:25.355293761Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d2167e06-9170-46c6-be3b-b7cee31f7534 name=/runtime.v1.ImageService/PullImage
	Dec 02 15:36:34 functional-310311 crio[4614]: time="2025-12-02T15:36:34.355345303Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=400fc0b8-3020-4206-8f00-c4a9ae73566c name=/runtime.v1.ImageService/PullImage
	Dec 02 15:36:50 functional-310311 crio[4614]: time="2025-12-02T15:36:50.347046652Z" level=info msg="Stopping pod sandbox: 0d931fa708bdc1ca3ce4ccf438820949b1149a8af0a10813c9ac846d0e60deac" id=b4270f66-c392-4267-a0cb-6f0ca19a3dbe name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 15:36:50 functional-310311 crio[4614]: time="2025-12-02T15:36:50.347108954Z" level=info msg="Stopped pod sandbox (already stopped): 0d931fa708bdc1ca3ce4ccf438820949b1149a8af0a10813c9ac846d0e60deac" id=b4270f66-c392-4267-a0cb-6f0ca19a3dbe name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 15:36:50 functional-310311 crio[4614]: time="2025-12-02T15:36:50.347470937Z" level=info msg="Removing pod sandbox: 0d931fa708bdc1ca3ce4ccf438820949b1149a8af0a10813c9ac846d0e60deac" id=87acaf54-5c01-4d71-8d28-0c96d940a650 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 15:36:50 functional-310311 crio[4614]: time="2025-12-02T15:36:50.351048902Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 15:36:50 functional-310311 crio[4614]: time="2025-12-02T15:36:50.351125343Z" level=info msg="Removed pod sandbox: 0d931fa708bdc1ca3ce4ccf438820949b1149a8af0a10813c9ac846d0e60deac" id=87acaf54-5c01-4d71-8d28-0c96d940a650 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 15:37:16 functional-310311 crio[4614]: time="2025-12-02T15:37:16.354934878Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8ee46828-ee9a-4952-881c-34986c261d9c name=/runtime.v1.ImageService/PullImage
	Dec 02 15:37:25 functional-310311 crio[4614]: time="2025-12-02T15:37:25.35558717Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=637b87f0-63c5-4a85-b59e-437d431fc30d name=/runtime.v1.ImageService/PullImage
	Dec 02 15:38:47 functional-310311 crio[4614]: time="2025-12-02T15:38:47.355754473Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4822026a-2f5a-4ae5-a0b5-c03255a6f5ee name=/runtime.v1.ImageService/PullImage
	Dec 02 15:38:58 functional-310311 crio[4614]: time="2025-12-02T15:38:58.355022109Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3e0ec07d-f743-498d-ba26-092dbe784cc4 name=/runtime.v1.ImageService/PullImage
	Dec 02 15:41:31 functional-310311 crio[4614]: time="2025-12-02T15:41:31.354694171Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c785f356-1cde-4f7d-a97f-1da55057148f name=/runtime.v1.ImageService/PullImage
	Dec 02 15:41:49 functional-310311 crio[4614]: time="2025-12-02T15:41:49.355299647Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c8fdac37-77db-4411-8c5c-c675cb261907 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	335394d7d7891       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   be3030682ce18       sp-pod                                       default
	1c4182c1499a6       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  9 minutes ago       Running             nginx                       0                   45131950e9f21       nginx-svc                                    default
	a7654344fee1c       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   d72e94bdb1e8f       mysql-844cf969f6-jcmq2                       default
	421b3247c7d10       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   327ecc7f8ee50       kubernetes-dashboard-b84665fb8-2mh9p         kubernetes-dashboard
	b7cd08d8ebb23       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   61bbe42b29a5d       dashboard-metrics-scraper-5565989548-kxzjl   kubernetes-dashboard
	ed23f4b2c092f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   d5dad5b39c389       busybox-mount                                default
	7999b5b5f0613       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   5250dc28dc3e0       storage-provisioner                          kube-system
	a166c801489b6       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 10 minutes ago      Running             kube-controller-manager     2                   754d98dd63163       kube-controller-manager-functional-310311    kube-system
	4ae9fc6339687       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                                 10 minutes ago      Running             kube-apiserver              0                   9268c2c0b0bd1       kube-apiserver-functional-310311             kube-system
	bf2293a94cfb6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 10 minutes ago      Running             etcd                        1                   f59988f793bca       etcd-functional-310311                       kube-system
	e0a07b9d123d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   5250dc28dc3e0       storage-provisioner                          kube-system
	1c6db75afb9a3       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 11 minutes ago      Running             kube-proxy                  1                   9b68757df1b6e       kube-proxy-jprbg                             kube-system
	c980e74fdcf46       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   27b97ed5c294b       kindnet-7hbt6                                kube-system
	d28c1f81aaf36       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 11 minutes ago      Exited              kube-controller-manager     1                   754d98dd63163       kube-controller-manager-functional-310311    kube-system
	51fb31dbd89e5       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 11 minutes ago      Running             kube-scheduler              1                   0441d9a5a0ed2       kube-scheduler-functional-310311             kube-system
	4196db3e30777       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 11 minutes ago      Running             coredns                     1                   c8e89f782aebf       coredns-7d764666f9-d97ns                     kube-system
	f74f87eec11fe       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 11 minutes ago      Exited              coredns                     0                   c8e89f782aebf       coredns-7d764666f9-d97ns                     kube-system
	d78ef35bb5d24       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11               11 minutes ago      Exited              kindnet-cni                 0                   27b97ed5c294b       kindnet-7hbt6                                kube-system
	3f4e0ca4203bb       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 11 minutes ago      Exited              kube-proxy                  0                   9b68757df1b6e       kube-proxy-jprbg                             kube-system
	2f356a6b83099       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 12 minutes ago      Exited              kube-scheduler              0                   0441d9a5a0ed2       kube-scheduler-functional-310311             kube-system
	a1257bd6eff15       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 12 minutes ago      Exited              etcd                        0                   f59988f793bca       etcd-functional-310311                       kube-system
	
	
	==> coredns [4196db3e307778cd18bb5fe2686ecbab7293adae74da218c2a7df841aec33ba2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36078 - 34018 "HINFO IN 1937752254547307961.3088169377977784033. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018691807s
	
	
	==> coredns [f74f87eec11fec93dbffc7e5c6d2f464a275db87b6e2200d29bd9a4700d0760a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:60299 - 29919 "HINFO IN 1911771316528910639.6999383719533005823. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024736811s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-310311
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-310311
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=functional-310311
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_33_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:33:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-310311
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:45:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:45:45 +0000   Tue, 02 Dec 2025 15:33:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:45:45 +0000   Tue, 02 Dec 2025 15:33:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:45:45 +0000   Tue, 02 Dec 2025 15:33:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:45:45 +0000   Tue, 02 Dec 2025 15:34:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-310311
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                1b56aa73-bd7c-4b7d-8d94-7a30ba914172
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-zz2l6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-dqf4x            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-jcmq2                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m49s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                 coredns-7d764666f9-d97ns                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-310311                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-7hbt6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-310311              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-310311     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-jprbg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-310311              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-kxzjl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-2mh9p          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  11m   node-controller  Node functional-310311 event: Registered Node functional-310311 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-310311 event: Registered Node functional-310311 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 9b c8 59 55 e7 08 06
	[  +4.389247] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 07 ad 09 99 ea 08 06
	[Dec 2 15:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.025203] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023929] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 15:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023866] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023913] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +2.047808] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +4.031697] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +8.511329] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[ +16.382712] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 15:19] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	
	
	==> etcd [a1257bd6eff15c1a93575563c8479254ae5857d643464355a9d462fda3dca466] <==
	{"level":"warn","ts":"2025-12-02T15:33:45.338057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:33:46.889305Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.919149ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T15:33:46.889471Z","caller":"traceutil/trace.go:172","msg":"trace[147703231] range","detail":"{range_begin:/registry/clusterroles/admin; range_end:; response_count:0; response_revision:106; }","duration":"122.110342ms","start":"2025-12-02T15:33:46.767347Z","end":"2025-12-02T15:33:46.889457Z","steps":["trace[147703231] 'range keys from in-memory index tree'  (duration: 121.812041ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T15:33:47.018612Z","caller":"traceutil/trace.go:172","msg":"trace[301034709] transaction","detail":"{read_only:false; response_revision:107; number_of_response:1; }","duration":"123.731336ms","start":"2025-12-02T15:33:46.894863Z","end":"2025-12-02T15:33:47.018594Z","steps":["trace[301034709] 'process raft request'  (duration: 123.629634ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T15:33:47.171599Z","caller":"traceutil/trace.go:172","msg":"trace[1600313486] transaction","detail":"{read_only:false; response_revision:108; number_of_response:1; }","duration":"148.943743ms","start":"2025-12-02T15:33:47.022626Z","end":"2025-12-02T15:33:47.171570Z","steps":["trace[1600313486] 'process raft request'  (duration: 81.549076ms)","trace[1600313486] 'compare'  (duration: 67.265815ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:33:47.390025Z","caller":"traceutil/trace.go:172","msg":"trace[942291285] transaction","detail":"{read_only:false; response_revision:110; number_of_response:1; }","duration":"150.738626ms","start":"2025-12-02T15:33:47.239261Z","end":"2025-12-02T15:33:47.389999Z","steps":["trace[942291285] 'process raft request'  (duration: 62.128603ms)","trace[942291285] 'compare'  (duration: 88.486737ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:33:47.574239Z","caller":"traceutil/trace.go:172","msg":"trace[313786946] transaction","detail":"{read_only:false; response_revision:112; number_of_response:1; }","duration":"121.581655ms","start":"2025-12-02T15:33:47.452640Z","end":"2025-12-02T15:33:47.574222Z","steps":["trace[313786946] 'process raft request'  (duration: 58.619619ms)","trace[313786946] 'compare'  (duration: 62.857771ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:34:49.341698Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T15:34:49.341787Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-310311","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T15:34:49.341947Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:34:49.343525Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:34:49.343605Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:34:49.343629Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-12-02T15:34:49.343701Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-02T15:34:49.343736Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:34:49.343754Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:34:49.343701Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:34:49.343767Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:34:49.343739Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:34:49.343771Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:34:49.343787Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:34:49.345945Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T15:34:49.346027Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:34:49.346055Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T15:34:49.346081Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-310311","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [bf2293a94cfb64575f5508124776c9b8c000ec591c9f81be4edefcaf28ea0858] <==
	{"level":"warn","ts":"2025-12-02T15:35:11.493725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.501711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.509629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.516906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.524563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.534551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.542142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.549946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.565512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.573107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.579838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.587826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.595910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.603931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.611885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.631843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.638665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.646540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.654034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:35:11.701782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:36:07.891652Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.543712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:36:07.891752Z","caller":"traceutil/trace.go:172","msg":"trace[1424164029] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:840; }","duration":"153.695404ms","start":"2025-12-02T15:36:07.738038Z","end":"2025-12-02T15:36:07.891733Z","steps":["trace[1424164029] 'range keys from in-memory index tree'  (duration: 153.47211ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T15:45:11.188948Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1182}
	{"level":"info","ts":"2025-12-02T15:45:11.210940Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1182,"took":"21.653005ms","hash":2827468148,"current-db-size-bytes":3571712,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1634304,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-12-02T15:45:11.211002Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2827468148,"revision":1182,"compact-revision":-1}
	
	
	==> kernel <==
	 15:45:45 up  2:28,  0 user,  load average: 0.10, 0.21, 0.50
	Linux functional-310311 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c980e74fdcf46fd54508be42d50365e6c4d6d065d3729e413c9f5fd3644ed608] <==
	I1202 15:43:39.786501       1 main.go:301] handling current node
	I1202 15:43:49.784196       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:43:49.784269       1 main.go:301] handling current node
	I1202 15:43:59.793328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:43:59.793363       1 main.go:301] handling current node
	I1202 15:44:09.786517       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:44:09.786554       1 main.go:301] handling current node
	I1202 15:44:19.784941       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:44:19.784987       1 main.go:301] handling current node
	I1202 15:44:29.788315       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:44:29.788353       1 main.go:301] handling current node
	I1202 15:44:39.786472       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:44:39.786548       1 main.go:301] handling current node
	I1202 15:44:49.788797       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:44:49.788848       1 main.go:301] handling current node
	I1202 15:44:59.793026       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:44:59.793064       1 main.go:301] handling current node
	I1202 15:45:09.784762       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:45:09.784828       1 main.go:301] handling current node
	I1202 15:45:19.789137       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:45:19.789172       1 main.go:301] handling current node
	I1202 15:45:29.788823       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:45:29.788881       1 main.go:301] handling current node
	I1202 15:45:39.787465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:45:39.787501       1 main.go:301] handling current node
	
	
	==> kindnet [d78ef35bb5d24b149c545043614dc8780674a55859fbc59d3f04bdd72d6daab3] <==
	I1202 15:33:57.516033       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 15:33:57.516347       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 15:33:57.516516       1 main.go:148] setting mtu 1500 for CNI 
	I1202 15:33:57.516533       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 15:33:57.516552       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T15:33:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 15:33:57.719442       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 15:33:57.719484       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 15:33:57.719500       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 15:33:57.719748       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 15:33:58.212664       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 15:33:58.212698       1 metrics.go:72] Registering metrics
	I1202 15:33:58.212799       1 controller.go:711] "Syncing nftables rules"
	I1202 15:34:07.720994       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:34:07.721051       1 main.go:301] handling current node
	I1202 15:34:17.722544       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:34:17.722589       1 main.go:301] handling current node
	I1202 15:34:27.723585       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:34:27.723636       1 main.go:301] handling current node
	I1202 15:34:37.723627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:34:37.723663       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4ae9fc63396873f2fd9803c96ed63b25ec9fdddd27135040a8c37c231b9a4388] <==
	I1202 15:35:12.476292       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:35:12.476292       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:35:13.053954       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1202 15:35:13.259492       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1202 15:35:13.261054       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 15:35:13.265549       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 15:35:13.716066       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 15:35:13.807178       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 15:35:13.852096       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 15:35:13.857133       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 15:35:15.575923       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 15:35:39.446201       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.95.174"}
	I1202 15:35:43.729049       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.21.216"}
	I1202 15:35:44.065964       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.218.28"}
	I1202 15:35:54.420516       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 15:35:54.534155       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.53.7"}
	I1202 15:35:54.546538       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.124.176"}
	I1202 15:35:56.219865       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.102.209"}
	E1202 15:36:13.389105       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40964: use of closed network connection
	E1202 15:36:14.134635       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40992: use of closed network connection
	E1202 15:36:16.008295       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41002: use of closed network connection
	I1202 15:36:16.705858       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.216.177"}
	E1202 15:36:20.837696       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41064: use of closed network connection
	E1202 15:36:29.876565       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35608: use of closed network connection
	I1202 15:45:12.070567       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [a166c801489b618e84e54fe16a81a35537fa2a8a9bfdd7ba6b1a4a642ac829ff] <==
	I1202 15:35:15.287035       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.287050       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.287063       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.287061       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.287083       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.287104       1 range_allocator.go:177] "Sending events to api server"
	I1202 15:35:15.287132       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.287148       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1202 15:35:15.287154       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:35:15.287160       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.287940       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.288128       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.287050       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.287036       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.288597       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.384367       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.386595       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:15.386663       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 15:35:15.386669       1 garbagecollector.go:169] "Proceeding to collect garbage"
	E1202 15:35:54.471371       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:35:54.476077       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:35:54.481577       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:35:54.481684       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:35:54.486019       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:35:54.490442       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [d28c1f81aaf365bcf5120df71c99434838ed0d3805101150743a36ed41077eef] <==
	I1202 15:34:39.638651       1 serving.go:386] Generated self-signed cert in-memory
	I1202 15:34:39.644720       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1202 15:34:39.644741       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:34:39.646015       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 15:34:39.646032       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 15:34:39.646145       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 15:34:39.646245       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 15:35:10.611789       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:50766->192.168.49.2:8441: read: connection reset by peer"
	
	
	==> kube-proxy [1c6db75afb9a3bc4961673955b6f4eabef0c592070c1718c229e3f3e42506242] <==
	I1202 15:34:39.362610       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:34:39.439563       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:35:34.340581       1 shared_informer.go:377] "Caches are synced"
	I1202 15:35:34.340619       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:35:34.340768       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:35:34.360004       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:35:34.360081       1 server_linux.go:136] "Using iptables Proxier"
	I1202 15:35:34.366202       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:35:34.366735       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 15:35:34.366759       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:35:34.368398       1 config.go:200] "Starting service config controller"
	I1202 15:35:34.368488       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:35:34.369038       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:35:34.369062       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:35:34.369085       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:35:34.369091       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:35:34.369364       1 config.go:309] "Starting node config controller"
	I1202 15:35:34.369394       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:35:34.369403       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:35:34.469326       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:35:34.469354       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 15:35:34.469331       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [3f4e0ca4203bbd6674dc48e017aa05c5a33194c854dc7ae584aeab4076a20760] <==
	I1202 15:33:55.107642       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:33:55.181894       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:33:55.282265       1 shared_informer.go:377] "Caches are synced"
	I1202 15:33:55.282301       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:33:55.282409       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:33:55.300897       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:33:55.300953       1 server_linux.go:136] "Using iptables Proxier"
	I1202 15:33:55.306052       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:33:55.306349       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 15:33:55.306385       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:33:55.307556       1 config.go:200] "Starting service config controller"
	I1202 15:33:55.307581       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:33:55.307705       1 config.go:309] "Starting node config controller"
	I1202 15:33:55.307720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:33:55.307727       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:33:55.307753       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:33:55.307759       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:33:55.307797       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:33:55.308313       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:33:55.407798       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:33:55.407810       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:33:55.409055       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2f356a6b83099912bdcc8bcfcb2010a40c5f4c52eed462b11fb8dd9e1a52865d] <==
	E1202 15:33:46.937932       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1202 15:33:46.939275       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1202 15:33:46.943594       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1202 15:33:46.944490       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1202 15:33:46.973317       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1202 15:33:46.974260       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1202 15:33:47.026736       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:33:47.027696       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1202 15:33:47.059296       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1202 15:33:47.060239       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1202 15:33:47.073728       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1202 15:33:47.074878       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1202 15:33:47.193508       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1202 15:33:47.194637       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1202 15:33:47.260566       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1202 15:33:47.261533       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1202 15:33:47.275059       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1202 15:33:47.276184       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1202 15:33:47.306847       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:33:47.307904       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I1202 15:33:49.945874       1 shared_informer.go:377] "Caches are synced"
	I1202 15:34:38.610076       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 15:34:38.610149       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 15:34:38.610215       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 15:34:38.610274       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [51fb31dbd89e5f13cd2170e709b9711a5030e47d16cf1c51956ad80e06d4897f] <==
	I1202 15:34:39.558968       1 serving.go:386] Generated self-signed cert in-memory
	W1202 15:34:39.561730       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	W1202 15:34:39.561763       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 15:34:39.561773       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 15:34:39.569761       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 15:34:39.569795       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:34:39.571757       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:34:39.571796       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:34:39.571880       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 15:34:39.573219       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 15:35:23.872938       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 15:43:55 functional-310311 kubelet[5185]: E1202 15:43:55.354542    5185 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-kxzjl" containerName="dashboard-metrics-scraper"
	Dec 02 15:43:56 functional-310311 kubelet[5185]: E1202 15:43:56.355483    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-zz2l6" podUID="462cfcc0-1b48-4bdf-a3bf-4a43e23f1cff"
	Dec 02 15:44:03 functional-310311 kubelet[5185]: E1202 15:44:03.354888    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-dqf4x" podUID="2e0c9293-647d-4f80-845f-a3f6b64c6522"
	Dec 02 15:44:11 functional-310311 kubelet[5185]: E1202 15:44:11.354844    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-zz2l6" podUID="462cfcc0-1b48-4bdf-a3bf-4a43e23f1cff"
	Dec 02 15:44:15 functional-310311 kubelet[5185]: E1202 15:44:15.355151    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-dqf4x" podUID="2e0c9293-647d-4f80-845f-a3f6b64c6522"
	Dec 02 15:44:25 functional-310311 kubelet[5185]: E1202 15:44:25.355372    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-zz2l6" podUID="462cfcc0-1b48-4bdf-a3bf-4a43e23f1cff"
	Dec 02 15:44:27 functional-310311 kubelet[5185]: E1202 15:44:27.354999    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-dqf4x" podUID="2e0c9293-647d-4f80-845f-a3f6b64c6522"
	Dec 02 15:44:32 functional-310311 kubelet[5185]: E1202 15:44:32.354078    5185 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-2mh9p" containerName="kubernetes-dashboard"
	Dec 02 15:44:37 functional-310311 kubelet[5185]: E1202 15:44:37.353869    5185 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-310311" containerName="etcd"
	Dec 02 15:44:40 functional-310311 kubelet[5185]: E1202 15:44:40.355754    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-zz2l6" podUID="462cfcc0-1b48-4bdf-a3bf-4a43e23f1cff"
	Dec 02 15:44:41 functional-310311 kubelet[5185]: E1202 15:44:41.355109    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-dqf4x" podUID="2e0c9293-647d-4f80-845f-a3f6b64c6522"
	Dec 02 15:44:51 functional-310311 kubelet[5185]: E1202 15:44:51.353929    5185 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-310311" containerName="kube-controller-manager"
	Dec 02 15:44:53 functional-310311 kubelet[5185]: E1202 15:44:53.354520    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-dqf4x" podUID="2e0c9293-647d-4f80-845f-a3f6b64c6522"
	Dec 02 15:44:53 functional-310311 kubelet[5185]: E1202 15:44:53.354844    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-zz2l6" podUID="462cfcc0-1b48-4bdf-a3bf-4a43e23f1cff"
	Dec 02 15:44:54 functional-310311 kubelet[5185]: E1202 15:44:54.354596    5185 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-310311" containerName="kube-scheduler"
	Dec 02 15:44:54 functional-310311 kubelet[5185]: E1202 15:44:54.354819    5185 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-d97ns" containerName="coredns"
	Dec 02 15:45:05 functional-310311 kubelet[5185]: E1202 15:45:05.354282    5185 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-310311" containerName="kube-apiserver"
	Dec 02 15:45:05 functional-310311 kubelet[5185]: E1202 15:45:05.354714    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-dqf4x" podUID="2e0c9293-647d-4f80-845f-a3f6b64c6522"
	Dec 02 15:45:08 functional-310311 kubelet[5185]: E1202 15:45:08.355538    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-zz2l6" podUID="462cfcc0-1b48-4bdf-a3bf-4a43e23f1cff"
	Dec 02 15:45:15 functional-310311 kubelet[5185]: E1202 15:45:15.354503    5185 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-kxzjl" containerName="dashboard-metrics-scraper"
	Dec 02 15:45:16 functional-310311 kubelet[5185]: E1202 15:45:16.354791    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-dqf4x" podUID="2e0c9293-647d-4f80-845f-a3f6b64c6522"
	Dec 02 15:45:21 functional-310311 kubelet[5185]: E1202 15:45:21.355273    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-zz2l6" podUID="462cfcc0-1b48-4bdf-a3bf-4a43e23f1cff"
	Dec 02 15:45:30 functional-310311 kubelet[5185]: E1202 15:45:30.355495    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-dqf4x" podUID="2e0c9293-647d-4f80-845f-a3f6b64c6522"
	Dec 02 15:45:34 functional-310311 kubelet[5185]: E1202 15:45:34.355232    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-zz2l6" podUID="462cfcc0-1b48-4bdf-a3bf-4a43e23f1cff"
	Dec 02 15:45:44 functional-310311 kubelet[5185]: E1202 15:45:44.355194    5185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-dqf4x" podUID="2e0c9293-647d-4f80-845f-a3f6b64c6522"
	
	
	==> kubernetes-dashboard [421b3247c7d107996b4c4760cbbea0cf0cbb182cc3a387587c012b4c3642d321] <==
	2025/12/02 15:36:00 Starting overwatch
	2025/12/02 15:36:00 Using namespace: kubernetes-dashboard
	2025/12/02 15:36:00 Using in-cluster config to connect to apiserver
	2025/12/02 15:36:00 Using secret token for csrf signing
	2025/12/02 15:36:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 15:36:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 15:36:00 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/02 15:36:00 Generating JWE encryption key
	2025/12/02 15:36:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 15:36:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 15:36:00 Initializing JWE encryption key from synchronized object
	2025/12/02 15:36:00 Creating in-cluster Sidecar client
	2025/12/02 15:36:00 Successful request to sidecar
	2025/12/02 15:36:00 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [7999b5b5f06138ee9dfb9589c334bcdf70f85712d9c85079f0eb02a5c272ab9f] <==
	W1202 15:45:20.486572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:22.489676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:22.494985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:24.498682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:24.502705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:26.506407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:26.511487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:28.514887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:28.518500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:30.521764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:30.526013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:32.529232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:32.534755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:34.537963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:34.541583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:36.545331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:36.550208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:38.553784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:38.557663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:40.561194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:40.565060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:42.568540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:42.573462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:44.578559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:45:44.582841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e0a07b9d123d91d6445dc53749d477da4fa0adfa0216a4b96c63451d987f2ace] <==
	I1202 15:34:39.325219       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 15:34:39.326904       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-310311 -n functional-310311
helpers_test.go:269: (dbg) Run:  kubectl --context functional-310311 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-zz2l6 hello-node-connect-9f67c86d4-dqf4x
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-310311 describe pod busybox-mount hello-node-5758569b79-zz2l6 hello-node-connect-9f67c86d4-dqf4x
helpers_test.go:290: (dbg) kubectl --context functional-310311 describe pod busybox-mount hello-node-5758569b79-zz2l6 hello-node-connect-9f67c86d4-dqf4x:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-310311/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:35:46 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://ed23f4b2c092fce35c08f98f21bade4616a0908e5a90e844c6fb192a7488cbea
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 15:35:48 +0000
	      Finished:     Tue, 02 Dec 2025 15:35:48 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t8fs7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-t8fs7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-310311
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m58s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.886s (1.887s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m58s  kubelet            Container created
	  Normal  Started    9m58s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-zz2l6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-310311/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:35:43 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kfjz4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kfjz4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-5758569b79-zz2l6 to functional-310311
	  Normal   Pulling    6m59s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m59s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m59s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m43s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-dqf4x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-310311/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:35:43 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh8p5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lh8p5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-dqf4x to functional-310311
	  Normal   Pulling    6m48s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m48s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m48s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x42 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2s (x42 over 10m)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (603.17s)
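Note: the kubelet events and pod descriptions above point at a single root cause: the hello-node pods reference the short image name kicbase/echo-server, and CRI-O on this node enforces short-name resolution, so the unqualified name resolves to an ambiguous candidate list and every pull is rejected (ErrImagePull, then ImagePullBackOff). A minimal way to confirm the node's short-name policy, assuming the standard containers registries.conf layout inside the kicbase image (the exact file placement is an assumption):

	# Inspect the short-name policy CRI-O is using on the minikube node.
	# The policy may live in registries.conf itself or in a drop-in under registries.conf.d/.
	minikube -p functional-310311 ssh -- grep -R "short-name-mode" /etc/containers/registries.conf /etc/containers/registries.conf.d/ 2>/dev/null
	# A value of "enforcing" matches the error reported in the kubelet events.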

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.68s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-310311 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-310311 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-zz2l6" [462cfcc0-1b48-4bdf-a3bf-4a43e23f1cff] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-310311 -n functional-310311
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-02 15:45:44.441935264 +0000 UTC m=+1831.308463963
functional_test.go:1460: (dbg) Run:  kubectl --context functional-310311 describe po hello-node-5758569b79-zz2l6 -n default
functional_test.go:1460: (dbg) kubectl --context functional-310311 describe po hello-node-5758569b79-zz2l6 -n default:
Name:             hello-node-5758569b79-zz2l6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-310311/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:35:43 +0000
Labels:           app=hello-node
pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kfjz4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kfjz4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-5758569b79-zz2l6 to functional-310311
Normal   Pulling    6m57s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m57s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m57s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m55s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m41s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-310311 logs hello-node-5758569b79-zz2l6 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-310311 logs hello-node-5758569b79-zz2l6 -n default: exit status 1 (73.093495ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-zz2l6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-310311 logs hello-node-5758569b79-zz2l6 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.68s)
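The deployment in this test is created with the unqualified image name kicbase/echo-server, which is exactly what the enforcing short-name policy rejects. A sketch of the same steps with a fully qualified reference, which sidesteps short-name resolution entirely (docker.io/kicbase/echo-server is assumed to be the intended registry):

	# Fully qualify the image so CRI-O never has to resolve a short name.
	kubectl --context functional-310311 create deployment hello-node --image=docker.io/kicbase/echo-server
	kubectl --context functional-310311 expose deployment hello-node --type=NodePort --port=8080
	# Wait for the pod to become Ready instead of polling manually.
	kubectl --context functional-310311 wait --for=condition=Ready pod -l app=hello-node --timeout=10m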

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image load --daemon kicbase/echo-server:functional-310311 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-310311" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (0.92s)
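`image ls` here reports what the CRI-O runtime inside the node knows about, so this failure means the loaded image either never reached the runtime or landed under a different name than the test expects. One way to check the runtime directly, assuming crictl is present in the kicbase image as it normally is:

	# List images as CRI-O sees them; images moved in from a Docker daemon
	# may appear under a localhost/ or docker.io/ prefix rather than the bare short name.
	minikube -p functional-310311 ssh -- sudo crictl images | grep echo-server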

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image load --daemon kicbase/echo-server:functional-310311 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-310311" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-310311
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image load --daemon kicbase/echo-server:functional-310311 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-310311" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.80s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image save kicbase/echo-server:functional-310311 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1202 15:35:52.962887  327239 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:35:52.963008  327239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:35:52.963015  327239 out.go:374] Setting ErrFile to fd 2...
	I1202 15:35:52.963021  327239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:35:52.963306  327239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:35:52.964194  327239 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 15:35:52.964351  327239 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 15:35:52.965005  327239 cli_runner.go:164] Run: docker container inspect functional-310311 --format={{.State.Status}}
	I1202 15:35:52.986346  327239 ssh_runner.go:195] Run: systemctl --version
	I1202 15:35:52.986413  327239 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-310311
	I1202 15:35:53.004294  327239 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/functional-310311/id_rsa Username:docker}
	I1202 15:35:53.103287  327239 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1202 15:35:53.103374  327239 cache_images.go:255] Failed to load cached images for "functional-310311": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1202 15:35:53.103401  327239 cache_images.go:267] failed pushing to: functional-310311

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.22s)
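This failure is a cascade rather than an independent bug: the stderr above shows the load step failing on stat of the tarball with "no such file or directory", i.e. the file that ImageSaveToFile was supposed to produce was never written. The intended round trip, sketched with a scratch path (/tmp/echo-server.tar is a placeholder, not the path the test uses):

	# Save the image from the cluster to a tarball, confirm it exists,
	# then load it back; each step depends on the previous one succeeding.
	minikube -p functional-310311 image save kicbase/echo-server:functional-310311 /tmp/echo-server.tar
	ls -l /tmp/echo-server.tar
	minikube -p functional-310311 image load /tmp/echo-server.tar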

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-310311
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image save --daemon kicbase/echo-server:functional-310311 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-310311
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-310311: exit status 1 (18.658033ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-310311

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-310311

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.38s)
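The test saves the image under `kicbase/echo-server:functional-310311` but then inspects the `localhost/`-prefixed name, and the daemon reports neither. A minimal diagnostic sketch (not part of the suite, image names taken from the commands above) that re-runs the save step and probes both names to see which one, if either, actually landed in the Docker daemon:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "functional-310311"
        // Re-run the save step from the test, then probe both names the test touches.
        save := exec.Command("out/minikube-linux-amd64", "-p", profile,
            "image", "save", "--daemon", "kicbase/echo-server:"+profile)
        if out, err := save.CombinedOutput(); err != nil {
            fmt.Printf("image save failed: %v\n%s", err, out)
            return
        }
        for _, name := range []string{
            "kicbase/echo-server:" + profile,           // name passed to image save
            "localhost/kicbase/echo-server:" + profile, // name the test inspects
        } {
            err := exec.Command("docker", "image", "inspect", name).Run()
            fmt.Printf("%-50s present=%v\n", name, err == nil)
        }
    }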

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 service --namespace=default --https --url hello-node: exit status 115 (546.106291ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31087
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-310311 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 service hello-node --url --format={{.IP}}: exit status 115 (548.033566ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-310311 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 service hello-node --url: exit status 115 (552.01979ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31087
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-310311 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31087
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.55s)
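All three ServiceCmd failures above are the same condition: minikube prints a NodePort URL for hello-node but exits with SVC_UNREACHABLE because no running pod backs the service. A minimal sketch of how to confirm that state directly, assuming the current kubectl context points at the functional-310311 cluster (the service name and namespace come from the commands above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Read the ready endpoint IPs for the hello-node service in the default namespace.
        out, err := exec.Command("kubectl", "--namespace", "default",
            "get", "endpoints", "hello-node",
            "-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
        if err != nil {
            fmt.Println("could not read endpoints:", err)
            return
        }
        ips := strings.Fields(string(out))
        if len(ips) == 0 {
            // Matches the SVC_UNREACHABLE failures: the NodePort exists but nothing backs it.
            fmt.Println("service hello-node has no ready endpoints; the printed URL cannot serve traffic")
            return
        }
        fmt.Println("ready endpoints:", ips)
    }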

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-448937 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-448937 --output=json --user=testUser: exit status 80 (2.077206998s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d1e96ef3-25b8-4c2c-b1fa-79c1ca277d9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-448937 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"269a27d9-48b1-4e3f-ad53-c929becaf761","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-02T15:54:40Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"b1cf1655-85ce-40cf-8ef3-634ea091d8f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-448937 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.08s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.08s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-448937 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-448937 --output=json --user=testUser: exit status 80 (2.076350196s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b3c5b57c-1109-4581-9e77-b2fd05574258","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-448937 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"bd0aecb9-7eb9-4bda-8419-50eb276c7a68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-02T15:54:42Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"48a7ca94-8088-49da-ac8b-f88af04b8ac1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-448937 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.08s)
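Both JSONOutput failures above surface through minikube's CloudEvents-style JSON stream rather than plain text. A minimal sketch, assuming only the fields visible in the lines above, of filtering such a stream for error events (pipe `minikube pause -p <profile> --output=json` into it):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors the fields visible in the --output=json lines above.
    type event struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip any non-JSON noise
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error event: exitcode=%s name=%s\n%s\n",
                    ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
            }
        }
    }

Run against the pause output above, this would report the single GUEST_PAUSE error event with exit code 80.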

                                                
                                    
x
+
TestPause/serial/Pause (6.08s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-907557 --alsologtostderr -v=5
E1202 16:08:12.971561  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-907557 --alsologtostderr -v=5: exit status 80 (1.870957882s)

                                                
                                                
-- stdout --
	* Pausing node pause-907557 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 16:08:11.166993  478262 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:08:11.167095  478262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:08:11.167103  478262 out.go:374] Setting ErrFile to fd 2...
	I1202 16:08:11.167107  478262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:08:11.167330  478262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:08:11.167618  478262 out.go:368] Setting JSON to false
	I1202 16:08:11.167640  478262 mustload.go:66] Loading cluster: pause-907557
	I1202 16:08:11.168038  478262 config.go:182] Loaded profile config "pause-907557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:08:11.168443  478262 cli_runner.go:164] Run: docker container inspect pause-907557 --format={{.State.Status}}
	I1202 16:08:11.187584  478262 host.go:66] Checking if "pause-907557" exists ...
	I1202 16:08:11.187919  478262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:08:11.252068  478262 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:83 OomKillDisable:false NGoroutines:88 SystemTime:2025-12-02 16:08:11.241332525 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:08:11.252748  478262 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-907557 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 16:08:11.254746  478262 out.go:179] * Pausing node pause-907557 ... 
	I1202 16:08:11.255755  478262 host.go:66] Checking if "pause-907557" exists ...
	I1202 16:08:11.256031  478262 ssh_runner.go:195] Run: systemctl --version
	I1202 16:08:11.256094  478262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:11.277401  478262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/pause-907557/id_rsa Username:docker}
	I1202 16:08:11.379305  478262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:08:11.392391  478262 pause.go:52] kubelet running: true
	I1202 16:08:11.392477  478262 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:08:11.522498  478262 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:08:11.522588  478262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:08:11.592012  478262 cri.go:89] found id: "b2836aceeb8807e0993320e05f6aa6c4be7c30aaaa190092f8e98f5f7dd646ec"
	I1202 16:08:11.592039  478262 cri.go:89] found id: "34828dad597db079c97a036969df0740139e6fd38885ad5627968129aef7c2b3"
	I1202 16:08:11.592065  478262 cri.go:89] found id: "586f014c53211c1af9d8288055382380c3d51998056d288238f813c46118b641"
	I1202 16:08:11.592069  478262 cri.go:89] found id: "1ac7ddf9843eebd770bec15da5164025aa9877f89ae53a56ffdd6e14a093fe56"
	I1202 16:08:11.592071  478262 cri.go:89] found id: "7cc002479c3d20848066c689b18ebdf1db75e87f1c451b1526e550789e7a63fa"
	I1202 16:08:11.592075  478262 cri.go:89] found id: "cdfe7eda529156977893291247b97065289958fe65cbac19931af954d1f7e904"
	I1202 16:08:11.592080  478262 cri.go:89] found id: "132312565fa9df9459ca2fab422a4a035d2dd56ac519dec4d9ca9c4397bc628b"
	I1202 16:08:11.592084  478262 cri.go:89] found id: ""
	I1202 16:08:11.592134  478262 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:08:11.604184  478262 retry.go:31] will retry after 154.066385ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:08:11Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:08:11.758575  478262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:08:11.773550  478262 pause.go:52] kubelet running: false
	I1202 16:08:11.773619  478262 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:08:11.893514  478262 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:08:11.893624  478262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:08:11.964173  478262 cri.go:89] found id: "b2836aceeb8807e0993320e05f6aa6c4be7c30aaaa190092f8e98f5f7dd646ec"
	I1202 16:08:11.964202  478262 cri.go:89] found id: "34828dad597db079c97a036969df0740139e6fd38885ad5627968129aef7c2b3"
	I1202 16:08:11.964208  478262 cri.go:89] found id: "586f014c53211c1af9d8288055382380c3d51998056d288238f813c46118b641"
	I1202 16:08:11.964214  478262 cri.go:89] found id: "1ac7ddf9843eebd770bec15da5164025aa9877f89ae53a56ffdd6e14a093fe56"
	I1202 16:08:11.964218  478262 cri.go:89] found id: "7cc002479c3d20848066c689b18ebdf1db75e87f1c451b1526e550789e7a63fa"
	I1202 16:08:11.964223  478262 cri.go:89] found id: "cdfe7eda529156977893291247b97065289958fe65cbac19931af954d1f7e904"
	I1202 16:08:11.964228  478262 cri.go:89] found id: "132312565fa9df9459ca2fab422a4a035d2dd56ac519dec4d9ca9c4397bc628b"
	I1202 16:08:11.964232  478262 cri.go:89] found id: ""
	I1202 16:08:11.964273  478262 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:08:11.977881  478262 retry.go:31] will retry after 191.282054ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:08:11Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:08:12.170361  478262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:08:12.183619  478262 pause.go:52] kubelet running: false
	I1202 16:08:12.183691  478262 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:08:12.306389  478262 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:08:12.306516  478262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:08:12.376150  478262 cri.go:89] found id: "b2836aceeb8807e0993320e05f6aa6c4be7c30aaaa190092f8e98f5f7dd646ec"
	I1202 16:08:12.376172  478262 cri.go:89] found id: "34828dad597db079c97a036969df0740139e6fd38885ad5627968129aef7c2b3"
	I1202 16:08:12.376177  478262 cri.go:89] found id: "586f014c53211c1af9d8288055382380c3d51998056d288238f813c46118b641"
	I1202 16:08:12.376180  478262 cri.go:89] found id: "1ac7ddf9843eebd770bec15da5164025aa9877f89ae53a56ffdd6e14a093fe56"
	I1202 16:08:12.376182  478262 cri.go:89] found id: "7cc002479c3d20848066c689b18ebdf1db75e87f1c451b1526e550789e7a63fa"
	I1202 16:08:12.376185  478262 cri.go:89] found id: "cdfe7eda529156977893291247b97065289958fe65cbac19931af954d1f7e904"
	I1202 16:08:12.376188  478262 cri.go:89] found id: "132312565fa9df9459ca2fab422a4a035d2dd56ac519dec4d9ca9c4397bc628b"
	I1202 16:08:12.376190  478262 cri.go:89] found id: ""
	I1202 16:08:12.376234  478262 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:08:12.388551  478262 retry.go:31] will retry after 356.403726ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:08:12Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:08:12.745158  478262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:08:12.759006  478262 pause.go:52] kubelet running: false
	I1202 16:08:12.759076  478262 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:08:12.875710  478262 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:08:12.875810  478262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:08:12.943951  478262 cri.go:89] found id: "b2836aceeb8807e0993320e05f6aa6c4be7c30aaaa190092f8e98f5f7dd646ec"
	I1202 16:08:12.943975  478262 cri.go:89] found id: "34828dad597db079c97a036969df0740139e6fd38885ad5627968129aef7c2b3"
	I1202 16:08:12.943979  478262 cri.go:89] found id: "586f014c53211c1af9d8288055382380c3d51998056d288238f813c46118b641"
	I1202 16:08:12.943982  478262 cri.go:89] found id: "1ac7ddf9843eebd770bec15da5164025aa9877f89ae53a56ffdd6e14a093fe56"
	I1202 16:08:12.943985  478262 cri.go:89] found id: "7cc002479c3d20848066c689b18ebdf1db75e87f1c451b1526e550789e7a63fa"
	I1202 16:08:12.943988  478262 cri.go:89] found id: "cdfe7eda529156977893291247b97065289958fe65cbac19931af954d1f7e904"
	I1202 16:08:12.943991  478262 cri.go:89] found id: "132312565fa9df9459ca2fab422a4a035d2dd56ac519dec4d9ca9c4397bc628b"
	I1202 16:08:12.943993  478262 cri.go:89] found id: ""
	I1202 16:08:12.944030  478262 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:08:12.959284  478262 out.go:203] 
	W1202 16:08:12.960501  478262 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:08:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:08:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 16:08:12.960531  478262 out.go:285] * 
	* 
	W1202 16:08:12.964552  478262 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 16:08:12.965707  478262 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-907557 --alsologtostderr -v=5" : exit status 80
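The stderr above shows the pause path retrying `sudo runc list -f json` three times with growing delays (154ms, 191ms, 356ms) before giving up with GUEST_PAUSE; every attempt hits the same `open /run/runc: no such file or directory` error on this node. A minimal sketch of that retry-with-backoff pattern, as an illustration only (not minikube's retry.go), assuming the same command and roughly the same delays:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // runWithRetry mirrors the pattern in the retry.go lines above: re-run the
    // command a few times with a growing, jittered delay before giving up.
    func runWithRetry(attempts int, base time.Duration, name string, args ...string) ([]byte, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command(name, args...).CombinedOutput()
            if err == nil {
                return out, nil
            }
            lastErr = err
            delay := base*time.Duration(1<<i) + time.Duration(rand.Intn(100))*time.Millisecond
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return nil, lastErr
    }

    func main() {
        // The command from the log; here all retries are exhausted because /run/runc is absent.
        if _, err := runWithRetry(3, 150*time.Millisecond, "sudo", "runc", "list", "-f", "json"); err != nil {
            fmt.Println("giving up:", err)
        }
    }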
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-907557
helpers_test.go:243: (dbg) docker inspect pause-907557:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93",
	        "Created": "2025-12-02T16:07:13.118033261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 462078,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:07:13.231842186Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93/hostname",
	        "HostsPath": "/var/lib/docker/containers/1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93/hosts",
	        "LogPath": "/var/lib/docker/containers/1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93/1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93-json.log",
	        "Name": "/pause-907557",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-907557:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-907557",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93",
	                "LowerDir": "/var/lib/docker/overlay2/d02d7a352a775308f0914038d3d1a1bcb04fea5d36d1d76375f924ef3a2c24df-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d02d7a352a775308f0914038d3d1a1bcb04fea5d36d1d76375f924ef3a2c24df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d02d7a352a775308f0914038d3d1a1bcb04fea5d36d1d76375f924ef3a2c24df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d02d7a352a775308f0914038d3d1a1bcb04fea5d36d1d76375f924ef3a2c24df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-907557",
	                "Source": "/var/lib/docker/volumes/pause-907557/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-907557",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-907557",
	                "name.minikube.sigs.k8s.io": "pause-907557",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a888f19e369313d7cdccded30acab63611faab6ac0522d47662f5acccd4248b0",
	            "SandboxKey": "/var/run/docker/netns/a888f19e3693",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-907557": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dbdc5e8fde6f66d7813af3b29cbabf22efadef370f7024cd569312c85aaf9c38",
	                    "EndpointID": "23c2ac36d67c52dfaed52ae4376d1165509445931bfc4a85c7017f4ad7d597fd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "36:ce:7f:19:57:0b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-907557",
	                        "1703ec85b899"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
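Earlier in the stderr, the `docker container inspect -f` call resolves the SSH host port with a Go template over exactly the NetworkSettings.Ports map shown in this inspect dump (22/tcp bound to 33099). A minimal self-contained sketch of that lookup, using a struct that mirrors only the fields needed and the values from the dump above:

    package main

    import (
        "os"
        "text/template"
    )

    // Minimal mirror of the NetworkSettings.Ports shape shown in the inspect output.
    type binding struct {
        HostIp   string
        HostPort string
    }

    type container struct {
        NetworkSettings struct {
            Ports map[string][]binding
        }
    }

    func main() {
        // Same template minikube passes to `docker container inspect -f` in the log above:
        // index the Ports map by "22/tcp", take the first binding, read its HostPort.
        const f = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`

        c := container{}
        c.NetworkSettings.Ports = map[string][]binding{
            "22/tcp": {{HostIp: "127.0.0.1", HostPort: "33099"}}, // values from the inspect dump
        }

        tmpl := template.Must(template.New("port").Parse(f))
        if err := tmpl.Execute(os.Stdout, c); err != nil { // prints 33099
            panic(err)
        }
    }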
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-907557 -n pause-907557
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-907557 -n pause-907557: exit status 2 (356.516651ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-907557 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-907557 logs -n 25: (1.039762271s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-259576 --memory=3072 --driver=docker  --container-runtime=crio                                  │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │ 02 Dec 25 16:05 UTC │
	│ stop    │ -p scheduled-stop-259576 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --cancel-scheduled                                                                       │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │ 02 Dec 25 16:05 UTC │
	│ stop    │ -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:06 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:06 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:06 UTC │ 02 Dec 25 16:06 UTC │
	│ delete  │ -p scheduled-stop-259576                                                                                          │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:06 UTC │ 02 Dec 25 16:06 UTC │
	│ start   │ -p insufficient-storage-319725 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio  │ insufficient-storage-319725 │ jenkins │ v1.37.0 │ 02 Dec 25 16:06 UTC │                     │
	│ delete  │ -p insufficient-storage-319725                                                                                    │ insufficient-storage-319725 │ jenkins │ v1.37.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:07 UTC │
	│ start   │ -p pause-907557 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio         │ pause-907557                │ jenkins │ v1.37.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:08 UTC │
	│ start   │ -p offline-crio-893562 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ offline-crio-893562         │ jenkins │ v1.37.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:07 UTC │
	│ start   │ -p running-upgrade-136818 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ running-upgrade-136818      │ jenkins │ v1.35.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:07 UTC │
	│ start   │ -p stopped-upgrade-937293 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ stopped-upgrade-937293      │ jenkins │ v1.35.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:07 UTC │
	│ delete  │ -p offline-crio-893562                                                                                            │ offline-crio-893562         │ jenkins │ v1.37.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:07 UTC │
	│ stop    │ stopped-upgrade-937293 stop                                                                                       │ stopped-upgrade-937293      │ jenkins │ v1.35.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:08 UTC │
	│ start   │ -p running-upgrade-136818 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ running-upgrade-136818      │ jenkins │ v1.37.0 │ 02 Dec 25 16:07 UTC │                     │
	│ start   │ -p missing-upgrade-881462 --memory=3072 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-881462      │ jenkins │ v1.35.0 │ 02 Dec 25 16:07 UTC │                     │
	│ start   │ -p stopped-upgrade-937293 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ stopped-upgrade-937293      │ jenkins │ v1.37.0 │ 02 Dec 25 16:08 UTC │                     │
	│ start   │ -p pause-907557 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ pause-907557                │ jenkins │ v1.37.0 │ 02 Dec 25 16:08 UTC │ 02 Dec 25 16:08 UTC │
	│ pause   │ -p pause-907557 --alsologtostderr -v=5                                                                            │ pause-907557                │ jenkins │ v1.37.0 │ 02 Dec 25 16:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:08:04
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:08:04.660286  474687 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:08:04.660602  474687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:08:04.660617  474687 out.go:374] Setting ErrFile to fd 2...
	I1202 16:08:04.660622  474687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:08:04.660939  474687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:08:04.661417  474687 out.go:368] Setting JSON to false
	I1202 16:08:04.662981  474687 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10226,"bootTime":1764681459,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:08:04.663094  474687 start.go:143] virtualization: kvm guest
	I1202 16:08:04.665241  474687 out.go:179] * [pause-907557] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:08:04.667410  474687 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:08:04.667460  474687 notify.go:221] Checking for updates...
	I1202 16:08:04.670019  474687 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:08:04.673302  474687 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:08:04.674597  474687 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:08:04.675744  474687 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:08:04.680254  474687 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:08:04.682115  474687 config.go:182] Loaded profile config "pause-907557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:08:04.682972  474687 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:08:04.711666  474687 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:08:04.711798  474687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:08:04.800599  474687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-02 16:08:04.786635445 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:08:04.801221  474687 docker.go:319] overlay module found
	I1202 16:08:04.805254  474687 out.go:179] * Using the docker driver based on existing profile
	I1202 16:08:04.806522  474687 start.go:309] selected driver: docker
	I1202 16:08:04.806545  474687 start.go:927] validating driver "docker" against &{Name:pause-907557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-907557 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:08:04.806755  474687 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:08:04.806888  474687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:08:04.903441  474687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:84 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-02 16:08:04.889586367 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:08:04.904454  474687 cni.go:84] Creating CNI manager for ""
	I1202 16:08:04.904539  474687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:08:04.904622  474687 start.go:353] cluster config:
	{Name:pause-907557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-907557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:08:04.908854  474687 out.go:179] * Starting "pause-907557" primary control-plane node in "pause-907557" cluster
	I1202 16:08:04.910224  474687 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:08:04.911994  474687 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:08:04.913813  474687 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:08:04.913856  474687 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 16:08:04.913868  474687 cache.go:65] Caching tarball of preloaded images
	I1202 16:08:04.913854  474687 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:08:04.913988  474687 preload.go:238] Found /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 16:08:04.913999  474687 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 16:08:04.914174  474687 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/config.json ...
	I1202 16:08:04.942384  474687 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:08:04.942488  474687 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 16:08:04.942535  474687 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:08:04.942612  474687 start.go:360] acquireMachinesLock for pause-907557: {Name:mkcf3bb036c9115abf66275504f1edf44ef5f737 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:08:04.942782  474687 start.go:364] duration metric: took 56.275µs to acquireMachinesLock for "pause-907557"
	I1202 16:08:04.942838  474687 start.go:96] Skipping create...Using existing machine configuration
	I1202 16:08:04.942848  474687 fix.go:54] fixHost starting: 
	I1202 16:08:04.943151  474687 cli_runner.go:164] Run: docker container inspect pause-907557 --format={{.State.Status}}
	I1202 16:08:04.979618  474687 fix.go:112] recreateIfNeeded on pause-907557: state=Running err=<nil>
	W1202 16:08:04.979644  474687 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 16:08:00.637166  473081 out.go:252] * Restarting existing docker container for "stopped-upgrade-937293" ...
	I1202 16:08:00.637296  473081 cli_runner.go:164] Run: docker start stopped-upgrade-937293
	I1202 16:08:00.970562  473081 cli_runner.go:164] Run: docker container inspect stopped-upgrade-937293 --format={{.State.Status}}
	I1202 16:08:00.995085  473081 kic.go:430] container "stopped-upgrade-937293" state is running.
	I1202 16:08:00.996079  473081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-937293
	I1202 16:08:01.023639  473081 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/config.json ...
	I1202 16:08:01.023946  473081 machine.go:94] provisionDockerMachine start ...
	I1202 16:08:01.024030  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:01.051546  473081 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:01.051905  473081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1202 16:08:01.051922  473081 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:08:01.052746  473081 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33716->127.0.0.1:33119: read: connection reset by peer
	I1202 16:08:04.187005  473081 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-937293
	
	I1202 16:08:04.187045  473081 ubuntu.go:182] provisioning hostname "stopped-upgrade-937293"
	I1202 16:08:04.187121  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:04.207830  473081 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:04.208158  473081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1202 16:08:04.208181  473081 main.go:143] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-937293 && echo "stopped-upgrade-937293" | sudo tee /etc/hostname
	I1202 16:08:04.357518  473081 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-937293
	
	I1202 16:08:04.357630  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:04.379822  473081 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:04.380122  473081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1202 16:08:04.380149  473081 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-937293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-937293/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-937293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:08:04.516702  473081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:08:04.516735  473081 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:08:04.516765  473081 ubuntu.go:190] setting up certificates
	I1202 16:08:04.516778  473081 provision.go:84] configureAuth start
	I1202 16:08:04.516843  473081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-937293
	I1202 16:08:04.540913  473081 provision.go:143] copyHostCerts
	I1202 16:08:04.540981  473081 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:08:04.540994  473081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:08:04.541076  473081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:08:04.541193  473081 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:08:04.541200  473081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:08:04.541241  473081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:08:04.541319  473081 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:08:04.541501  473081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:08:04.541581  473081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:08:04.541721  473081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-937293 san=[127.0.0.1 192.168.103.2 localhost minikube stopped-upgrade-937293]
	I1202 16:08:04.675802  473081 provision.go:177] copyRemoteCerts
	I1202 16:08:04.675872  473081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:08:04.675949  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:04.699013  473081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/stopped-upgrade-937293/id_rsa Username:docker}
	I1202 16:08:04.807550  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1202 16:08:04.864066  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:08:04.906701  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:08:04.946022  473081 provision.go:87] duration metric: took 429.226837ms to configureAuth
	I1202 16:08:04.946051  473081 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:08:04.946250  473081 config.go:182] Loaded profile config "stopped-upgrade-937293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1202 16:08:04.946388  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:04.979036  473081 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:04.979516  473081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1202 16:08:04.979539  473081 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:08:05.377267  473081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:08:05.377295  473081 machine.go:97] duration metric: took 4.353329264s to provisionDockerMachine
	I1202 16:08:05.377309  473081 start.go:293] postStartSetup for "stopped-upgrade-937293" (driver="docker")
	I1202 16:08:05.377322  473081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:08:05.377384  473081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:08:05.377442  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:05.402515  473081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/stopped-upgrade-937293/id_rsa Username:docker}
	I1202 16:08:05.510037  473081 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:08:05.515043  473081 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:08:05.515095  473081 main.go:143] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1202 16:08:05.515106  473081 main.go:143] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1202 16:08:05.515123  473081 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1202 16:08:05.515142  473081 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:08:05.515207  473081 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:08:05.515306  473081 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:08:05.515452  473081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:08:05.528147  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:08:05.556820  473081 start.go:296] duration metric: took 179.496111ms for postStartSetup
	I1202 16:08:05.556905  473081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:08:05.556942  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:05.582202  473081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/stopped-upgrade-937293/id_rsa Username:docker}
	I1202 16:08:05.681104  473081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:08:05.688213  473081 fix.go:56] duration metric: took 5.079350199s for fixHost
	I1202 16:08:05.688241  473081 start.go:83] releasing machines lock for "stopped-upgrade-937293", held for 5.079404594s
	I1202 16:08:05.688309  473081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-937293
	I1202 16:08:05.712241  473081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:08:05.712310  473081 ssh_runner.go:195] Run: cat /version.json
	I1202 16:08:05.712336  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:05.712361  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:05.736195  473081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/stopped-upgrade-937293/id_rsa Username:docker}
	I1202 16:08:05.738318  473081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/stopped-upgrade-937293/id_rsa Username:docker}
	W1202 16:08:05.915849  473081 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1202 16:08:05.915948  473081 ssh_runner.go:195] Run: systemctl --version
	I1202 16:08:05.921646  473081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:08:06.072409  473081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 16:08:06.079884  473081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:08:06.093718  473081 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1202 16:08:06.093803  473081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:08:06.107788  473081 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 16:08:06.107884  473081 start.go:496] detecting cgroup driver to use...
	I1202 16:08:06.107925  473081 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:08:06.107990  473081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:08:06.123647  473081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:08:06.139271  473081 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:08:06.139344  473081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:08:06.154995  473081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:08:06.169454  473081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:08:06.247760  473081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:08:06.328557  473081 docker.go:234] disabling docker service ...
	I1202 16:08:06.328637  473081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:08:06.342836  473081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:08:06.355678  473081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:08:06.428680  473081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:08:06.517340  473081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:08:06.532107  473081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:08:06.556936  473081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 16:08:06.556984  473081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.572569  473081 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:08:06.572628  473081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.585906  473081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.599941  473081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.613022  473081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:08:06.625470  473081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.638163  473081 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.649894  473081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.661374  473081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:08:06.671280  473081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:08:06.681481  473081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:06.757943  473081 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 16:08:06.872338  473081 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:08:06.872418  473081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:08:06.876829  473081 start.go:564] Will wait 60s for crictl version
	I1202 16:08:06.876895  473081 ssh_runner.go:195] Run: which crictl
	I1202 16:08:06.881154  473081 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 16:08:06.922061  473081 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1202 16:08:06.922145  473081 ssh_runner.go:195] Run: crio --version
	I1202 16:08:06.970124  473081 ssh_runner.go:195] Run: crio --version
	I1202 16:08:07.010620  473081 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
	I1202 16:08:04.982941  474687 out.go:252] * Updating the running docker "pause-907557" container ...
	I1202 16:08:04.982988  474687 machine.go:94] provisionDockerMachine start ...
	I1202 16:08:04.983071  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:05.008214  474687 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:05.008693  474687 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1202 16:08:05.008714  474687 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:08:05.189663  474687 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-907557
	
	I1202 16:08:05.189699  474687 ubuntu.go:182] provisioning hostname "pause-907557"
	I1202 16:08:05.189865  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:05.221835  474687 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:05.222220  474687 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1202 16:08:05.222286  474687 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-907557 && echo "pause-907557" | sudo tee /etc/hostname
	I1202 16:08:05.410252  474687 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-907557
	
	I1202 16:08:05.410332  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:05.435758  474687 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:05.436408  474687 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1202 16:08:05.436448  474687 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-907557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-907557/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-907557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:08:05.602825  474687 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:08:05.602867  474687 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:08:05.602901  474687 ubuntu.go:190] setting up certificates
	I1202 16:08:05.602913  474687 provision.go:84] configureAuth start
	I1202 16:08:05.602977  474687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-907557
	I1202 16:08:05.627502  474687 provision.go:143] copyHostCerts
	I1202 16:08:05.627569  474687 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:08:05.627583  474687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:08:05.627668  474687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:08:05.627792  474687 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:08:05.627803  474687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:08:05.627839  474687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:08:05.627922  474687 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:08:05.627931  474687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:08:05.627963  474687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:08:05.628035  474687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.pause-907557 san=[127.0.0.1 192.168.85.2 localhost minikube pause-907557]
	I1202 16:08:05.661184  474687 provision.go:177] copyRemoteCerts
	I1202 16:08:05.661254  474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:08:05.661297  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:05.687062  474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/pause-907557/id_rsa Username:docker}
	I1202 16:08:05.798494  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:08:05.817933  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 16:08:05.839154  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:08:05.858902  474687 provision.go:87] duration metric: took 255.972432ms to configureAuth
	I1202 16:08:05.858935  474687 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:08:05.859198  474687 config.go:182] Loaded profile config "pause-907557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:08:05.859316  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:05.880766  474687 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:05.881086  474687 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1202 16:08:05.881111  474687 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:08:06.256180  474687 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:08:06.256211  474687 machine.go:97] duration metric: took 1.273213936s to provisionDockerMachine
	I1202 16:08:06.256227  474687 start.go:293] postStartSetup for "pause-907557" (driver="docker")
	I1202 16:08:06.256243  474687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:08:06.256317  474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:08:06.256373  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:06.282408  474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/pause-907557/id_rsa Username:docker}
	I1202 16:08:06.390079  474687 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:08:06.395016  474687 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:08:06.395046  474687 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:08:06.395057  474687 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:08:06.395115  474687 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:08:06.395200  474687 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:08:06.395318  474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:08:06.403514  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:08:06.421361  474687 start.go:296] duration metric: took 165.11158ms for postStartSetup
	I1202 16:08:06.421461  474687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:08:06.421510  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:06.443969  474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/pause-907557/id_rsa Username:docker}
	I1202 16:08:06.556968  474687 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:08:06.563016  474687 fix.go:56] duration metric: took 1.62016321s for fixHost
	I1202 16:08:06.563043  474687 start.go:83] releasing machines lock for "pause-907557", held for 1.620247106s
	I1202 16:08:06.563109  474687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-907557
	I1202 16:08:06.585223  474687 ssh_runner.go:195] Run: cat /version.json
	I1202 16:08:06.585287  474687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:08:06.585304  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:06.585385  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:06.607589  474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/pause-907557/id_rsa Username:docker}
	I1202 16:08:06.608880  474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/pause-907557/id_rsa Username:docker}
	I1202 16:08:06.775414  474687 ssh_runner.go:195] Run: systemctl --version
	I1202 16:08:06.782792  474687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:08:06.826227  474687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:08:06.831854  474687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:08:06.831930  474687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:08:06.841100  474687 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 16:08:06.841133  474687 start.go:496] detecting cgroup driver to use...
	I1202 16:08:06.841176  474687 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:08:06.841220  474687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:08:06.859500  474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:08:06.873485  474687 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:08:06.873549  474687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:08:06.890738  474687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:08:06.905557  474687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:08:07.040214  474687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:08:07.163352  474687 docker.go:234] disabling docker service ...
	I1202 16:08:07.163414  474687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:08:07.180905  474687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:08:07.195391  474687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:08:07.316560  474687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:08:07.463055  474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:08:07.481721  474687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:08:07.502642  474687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:08:07.502717  474687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.513328  474687 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:08:07.513401  474687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.524560  474687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.534291  474687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.547686  474687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:08:07.558903  474687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.570344  474687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.579417  474687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.589555  474687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:08:07.598498  474687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:08:07.608586  474687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:07.739731  474687 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 16:08:07.942971  474687 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:08:07.943045  474687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:08:07.948155  474687 start.go:564] Will wait 60s for crictl version
	I1202 16:08:07.948232  474687 ssh_runner.go:195] Run: which crictl
	I1202 16:08:07.953310  474687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:08:07.986055  474687 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 16:08:07.986144  474687 ssh_runner.go:195] Run: crio --version
	I1202 16:08:08.026605  474687 ssh_runner.go:195] Run: crio --version
	I1202 16:08:08.083009  474687 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 16:08:04.326961  472164 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-881462:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.026194366s)
	I1202 16:08:04.326993  472164 kic.go:203] duration metric: took 4.026374721s to extract preloaded images to volume ...
	W1202 16:08:04.327092  472164 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 16:08:04.327124  472164 oci.go:249] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 16:08:04.327182  472164 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 16:08:04.390144  472164 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-881462 --name missing-upgrade-881462 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-881462 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-881462 --network missing-upgrade-881462 --ip 192.168.76.2 --volume missing-upgrade-881462:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I1202 16:08:04.743168  472164 cli_runner.go:164] Run: docker container inspect missing-upgrade-881462 --format={{.State.Running}}
	I1202 16:08:04.777402  472164 cli_runner.go:164] Run: docker container inspect missing-upgrade-881462 --format={{.State.Status}}
	I1202 16:08:04.809004  472164 cli_runner.go:164] Run: docker exec missing-upgrade-881462 stat /var/lib/dpkg/alternatives/iptables
	I1202 16:08:04.891570  472164 oci.go:144] the created container "missing-upgrade-881462" has a running status.
	I1202 16:08:04.891606  472164 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa...
	I1202 16:08:05.060364  472164 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 16:08:05.095893  472164 cli_runner.go:164] Run: docker container inspect missing-upgrade-881462 --format={{.State.Status}}
	I1202 16:08:05.127538  472164 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 16:08:05.127553  472164 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-881462 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 16:08:05.204063  472164 cli_runner.go:164] Run: docker container inspect missing-upgrade-881462 --format={{.State.Status}}
	I1202 16:08:05.235185  472164 machine.go:93] provisionDockerMachine start ...
	I1202 16:08:05.235276  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:05.268280  472164 main.go:141] libmachine: Using SSH client type: native
	I1202 16:08:05.268673  472164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1202 16:08:05.268685  472164 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 16:08:05.418615  472164 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-881462
	
	I1202 16:08:05.418637  472164 ubuntu.go:169] provisioning hostname "missing-upgrade-881462"
	I1202 16:08:05.418712  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:05.443838  472164 main.go:141] libmachine: Using SSH client type: native
	I1202 16:08:05.444137  472164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1202 16:08:05.444149  472164 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-881462 && echo "missing-upgrade-881462" | sudo tee /etc/hostname
	I1202 16:08:05.607615  472164 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-881462
	
	I1202 16:08:05.607698  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:05.631992  472164 main.go:141] libmachine: Using SSH client type: native
	I1202 16:08:05.632240  472164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1202 16:08:05.632265  472164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-881462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-881462/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-881462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:08:05.774559  472164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:08:05.774583  472164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:08:05.774632  472164 ubuntu.go:177] setting up certificates
	I1202 16:08:05.774648  472164 provision.go:84] configureAuth start
	I1202 16:08:05.774743  472164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-881462
	I1202 16:08:05.798524  472164 provision.go:143] copyHostCerts
	I1202 16:08:05.798686  472164 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:08:05.798698  472164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:08:05.798888  472164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:08:05.799054  472164 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:08:05.799066  472164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:08:05.799116  472164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:08:05.799205  472164 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:08:05.799212  472164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:08:05.799247  472164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:08:05.799318  472164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-881462 san=[127.0.0.1 192.168.76.2 localhost minikube missing-upgrade-881462]
	I1202 16:08:06.057617  472164 provision.go:177] copyRemoteCerts
	I1202 16:08:06.057664  472164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:08:06.057697  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:06.083556  472164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa Username:docker}
	I1202 16:08:06.185403  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:08:06.220630  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1202 16:08:06.249591  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:08:06.284995  472164 provision.go:87] duration metric: took 510.33204ms to configureAuth
	I1202 16:08:06.285022  472164 ubuntu.go:193] setting minikube options for container-runtime
	I1202 16:08:06.285240  472164 config.go:182] Loaded profile config "missing-upgrade-881462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1202 16:08:06.285396  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:06.306649  472164 main.go:141] libmachine: Using SSH client type: native
	I1202 16:08:06.306875  472164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1202 16:08:06.306891  472164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:08:06.575405  472164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:08:06.575437  472164 machine.go:96] duration metric: took 1.340219581s to provisionDockerMachine
	I1202 16:08:06.575449  472164 client.go:171] duration metric: took 7.436626322s to LocalClient.Create
	I1202 16:08:06.575470  472164 start.go:167] duration metric: took 7.436688509s to libmachine.API.Create "missing-upgrade-881462"
	I1202 16:08:06.575476  472164 start.go:293] postStartSetup for "missing-upgrade-881462" (driver="docker")
	I1202 16:08:06.575485  472164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:08:06.575539  472164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:08:06.575575  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:06.598831  472164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa Username:docker}
	I1202 16:08:06.698199  472164 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:08:06.702307  472164 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:08:06.702346  472164 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1202 16:08:06.702353  472164 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1202 16:08:06.702358  472164 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1202 16:08:06.702370  472164 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:08:06.702469  472164 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:08:06.702566  472164 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:08:06.702708  472164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:08:06.718080  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:08:06.749529  472164 start.go:296] duration metric: took 174.036223ms for postStartSetup
	I1202 16:08:06.750012  472164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-881462
	I1202 16:08:06.770082  472164 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/config.json ...
	I1202 16:08:06.770380  472164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:08:06.770438  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:06.790968  472164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa Username:docker}
	I1202 16:08:06.883900  472164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:08:06.889016  472164 start.go:128] duration metric: took 7.754169944s to createHost
	I1202 16:08:06.889038  472164 start.go:83] releasing machines lock for "missing-upgrade-881462", held for 7.754327934s
	I1202 16:08:06.889122  472164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-881462
	I1202 16:08:06.909482  472164 ssh_runner.go:195] Run: cat /version.json
	I1202 16:08:06.909506  472164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:08:06.909529  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:06.909591  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:06.931103  472164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa Username:docker}
	I1202 16:08:06.932339  472164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa Username:docker}
	I1202 16:08:07.118787  472164 ssh_runner.go:195] Run: systemctl --version
	I1202 16:08:07.123895  472164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:08:07.269545  472164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 16:08:07.274280  472164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:08:07.298806  472164 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1202 16:08:07.298889  472164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:08:07.339796  472164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
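The two find/mv invocations above sideline any pre-existing loopback, podman, and bridge CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs later (kindnet, per the recommendation further down) is consulted. A quick check, assuming the default /etc/cni/net.d directory from the log:

    # Configs minikube renamed out of the way vs. configs still active.
    ls -l /etc/cni/net.d/*.mk_disabled
    ls /etc/cni/net.d | grep -v mk_disabled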
	I1202 16:08:07.339814  472164 start.go:495] detecting cgroup driver to use...
	I1202 16:08:07.339856  472164 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:08:07.339909  472164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:08:07.365626  472164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:08:07.380026  472164 docker.go:217] disabling cri-docker service (if available) ...
	I1202 16:08:07.380067  472164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:08:07.397682  472164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:08:07.416842  472164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:08:07.508273  472164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:08:07.591760  472164 docker.go:233] disabling docker service ...
	I1202 16:08:07.591824  472164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:08:07.613065  472164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:08:07.626898  472164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:08:07.710254  472164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:08:07.840612  472164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:08:07.854785  472164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:08:07.874703  472164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 16:08:07.874764  472164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.888989  472164 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:08:07.889042  472164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.900565  472164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.914406  472164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.927501  472164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:08:07.939016  472164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.951536  472164 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.977006  472164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
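Taken together, the sed edits above pin the pause image, switch CRI-O to the systemd cgroup manager with conmon in the pod cgroup, and open unprivileged low ports via default_sysctls. A sketch for verifying the net effect on the file named in the log:

    # Keys rewritten by the sed commands above; expected values are shown in the comments.
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])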
	I1202 16:08:07.990798  472164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:08:08.005239  472164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:08:08.019767  472164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:08.180053  472164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 16:08:08.282078  472164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:08:08.282133  472164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:08:08.286260  472164 start.go:563] Will wait 60s for crictl version
	I1202 16:08:08.286324  472164 ssh_runner.go:195] Run: which crictl
	I1202 16:08:08.290897  472164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 16:08:08.331093  472164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1202 16:08:08.331151  472164 ssh_runner.go:195] Run: crio --version
	I1202 16:08:08.375458  472164 ssh_runner.go:195] Run: crio --version
	I1202 16:08:08.423553  472164 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
	I1202 16:08:03.863736  472154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:08:03.883779  472154 ssh_runner.go:195] Run: openssl version
	I1202 16:08:03.889983  472154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:08:03.900808  472154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:08:03.904758  472154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:08:03.904819  472154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:08:03.912079  472154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:08:03.922300  472154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:08:03.932880  472154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:03.936697  472154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:03.936760  472154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:03.943805  472154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:08:03.954020  472154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:08:03.964821  472154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:08:03.968715  472154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:08:03.968788  472154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:08:03.976251  472154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
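The hash-then-symlink pattern above is how OpenSSL-style trust directories are populated: each CA certificate is also reachable under a name derived from its subject hash (e.g. 3ec20f2e.0 for 2680992.pem in this run). A minimal sketch of the same two steps:

    # Link name = subject hash + ".0"; paths taken from the log above.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem)
    sudo ln -fs /etc/ssl/certs/2680992.pem "/etc/ssl/certs/${hash}.0"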
	I1202 16:08:03.986284  472154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:08:03.990280  472154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:08:03.997288  472154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:08:04.004495  472154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:08:04.011788  472154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:08:04.018538  472154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:08:04.025258  472154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
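Each `-checkend 86400` run above asks OpenSSL whether the certificate expires within the next 24 hours; a non-zero exit status is what flags a cert for regeneration before the cluster restart. A one-line sketch against one of the certs named in the log:

    # Exit 0: valid for at least another day; exit 1: expiring soon.
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo ok || echo "would be regenerated"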
	I1202 16:08:04.039079  472154 kubeadm.go:401] StartCluster: {Name:running-upgrade-136818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-136818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:08:04.039163  472154 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:08:04.039226  472154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:08:04.081169  472154 cri.go:89] found id: "903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710"
	I1202 16:08:04.081192  472154 cri.go:89] found id: "b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8"
	I1202 16:08:04.081196  472154 cri.go:89] found id: "498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b"
	I1202 16:08:04.081199  472154 cri.go:89] found id: "855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07"
	I1202 16:08:04.081202  472154 cri.go:89] found id: ""
	I1202 16:08:04.081239  472154 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:08:04.101216  472154 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b","pid":1389,"status":"running","bundle":"/run/containers/storage/overlay-containers/498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b/userdata","rootfs":"/var/lib/containers/storage/overlay/043b20234f45aa9c720cc9109a5364484d15d9370bbb576096d5f1eed88c10f6/merged","created":"2025-12-02T16:07:52.005955863Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bf915d6a","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bf915d6a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-02T16:07:51.913973331Z","io.kubernetes.cri-o.Image":"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.32.0","io.kubernetes.cri-o.ImageRef":"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-136818\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1ba1d3cf2a4b6df642811bd2326b893f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-136818_1ba1d3cf2a4b6df642811bd2326b893f/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube
-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/043b20234f45aa9c720cc9109a5364484d15d9370bbb576096d5f1eed88c10f6/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-running-upgrade-136818_kube-system_1ba1d3cf2a4b6df642811bd2326b893f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/effba5c89ba9a0f4077811a3531e632452986c518151590b36020b61a02d32f9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"effba5c89ba9a0f4077811a3531e632452986c518151590b36020b61a02d32f9","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-running-upgrade-136818_kube-system_1ba1d3cf2a4b6df642811bd2326b893f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1ba1d3cf2a4b6df642811bd2326b893f/containers/kube-apiserver/3dcd7607\",\"read
only\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1ba1d3cf2a4b6df642811bd2326b893f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.po
d.name":"kube-apiserver-running-upgrade-136818","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1ba1d3cf2a4b6df642811bd2326b893f","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.94.2:8443","kubernetes.io/config.hash":"1ba1d3cf2a4b6df642811bd2326b893f","kubernetes.io/config.seen":"2025-12-02T16:07:51.422586711Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07","pid":1391,"status":"running","bundle":"/run/containers/storage/overlay-containers/855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07/userdata","rootfs":"/var/lib/containers/storage/overlay/9410d13aee82ad90a4483f54945e6342423e6583fb127d
17b19dada4d317936f/merged","created":"2025-12-02T16:07:52.004847589Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e68be80f","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e68be80f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-02T16:07:51.910375348Z","io.kubernetes.cri-o.Image":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","io.kubernetes.cri-o.ImageN
ame":"registry.k8s.io/etcd:3.5.16-0","io.kubernetes.cri-o.ImageRef":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-136818\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6030e2b29200be865f9696b591299ad5\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-136818_6030e2b29200be865f9696b591299ad5/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9410d13aee82ad90a4483f54945e6342423e6583fb127d17b19dada4d317936f/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-running-upgrade-136818_kube-system_6030e2b29200be865f9696b591299ad5_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/269a5e7a6f62bbdd46dfae8bf3b9f5b1e5a15ce92015411e0514d9dca4caa8b0/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"269a5e7a
6f62bbdd46dfae8bf3b9f5b1e5a15ce92015411e0514d9dca4caa8b0","io.kubernetes.cri-o.SandboxName":"k8s_etcd-running-upgrade-136818_kube-system_6030e2b29200be865f9696b591299ad5_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6030e2b29200be865f9696b591299ad5/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6030e2b29200be865f9696b591299ad5/containers/etcd/82d7a8e2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"pro
pagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-running-upgrade-136818","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6030e2b29200be865f9696b591299ad5","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.94.2:2379","kubernetes.io/config.hash":"6030e2b29200be865f9696b591299ad5","kubernetes.io/config.seen":"2025-12-02T16:07:51.422583509Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710","pid":1414,"status":"running","bundle":"/run/containers/storage/overlay-containers/903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710/userdata","rootfs":"/var/lib/containers/storage/overlay/64e3
7b1d137fddcff0afd0be893be14b859aebbee1f5199c5f079955e2ad8854/merged","created":"2025-12-02T16:07:52.015503671Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8c4b12d6","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8c4b12d6\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-02T16:07:51.933247921Z","io.kubernetes.cri-o.Image":"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7
c39874012587d233807cfc5","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.32.0","io.kubernetes.cri-o.ImageRef":"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-136818\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c6d5dc30749655fbc404edf02e486cfd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-136818_c6d5dc30749655fbc404edf02e486cfd/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/64e37b1d137fddcff0afd0be893be14b859aebbee1f5199c5f079955e2ad8854/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-running-upgrade-136818_kube-system_c6d5dc30749655fbc404edf02e486cfd_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containe
rs/92506e23b0528ddd0771accd63d18e720bcfc96ba4c05a7cb6a0ede05e6caf6d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"92506e23b0528ddd0771accd63d18e720bcfc96ba4c05a7cb6a0ede05e6caf6d","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-running-upgrade-136818_kube-system_c6d5dc30749655fbc404edf02e486cfd_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c6d5dc30749655fbc404edf02e486cfd/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c6d5dc30749655fbc404edf02e486cfd/containers/kube-scheduler/cbb3ce83\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"p
ropagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-136818","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c6d5dc30749655fbc404edf02e486cfd","kubernetes.io/config.hash":"c6d5dc30749655fbc404edf02e486cfd","kubernetes.io/config.seen":"2025-12-02T16:07:51.422588600Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8","pid":1400,"status":"running","bundle":"/run/containers/storage/overlay-containers/b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8/userdata","rootfs":"/var/lib/containers/storage/overlay/efddfd7a0dd968fc062cb9f55311672abdfaf164ec1d0056c62a923f61941d67/merged
","created":"2025-12-02T16:07:52.008235424Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"99f3a73e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"99f3a73e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-02T16:07:51.923060165Z","io.kubernetes.cri-o.Image":"8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","io.kubernetes.cri-o.ImageName":"
registry.k8s.io/kube-controller-manager:v1.32.0","io.kubernetes.cri-o.ImageRef":"8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-136818\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b0e6c322806f264493e567f0fb779c4e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-136818_b0e6c322806f264493e567f0fb779c4e/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/efddfd7a0dd968fc062cb9f55311672abdfaf164ec1d0056c62a923f61941d67/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-running-upgrade-136818_kube-system_b0e6c322806f264493e567f0fb779c4e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/ov
erlay-containers/c5b6912aa33aba924cd88ac7a7d854a164efde95bfc50a0f280f2d434e5a1fa4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c5b6912aa33aba924cd88ac7a7d854a164efde95bfc50a0f280f2d434e5a1fa4","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-running-upgrade-136818_kube-system_b0e6c322806f264493e567f0fb779c4e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b0e6c322806f264493e567f0fb779c4e/containers/kube-controller-manager/bdda7d31\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b0e6c322806f264493e567f0fb779c4e/etc-hosts\",\"readonly
\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"
propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-136818","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b0e6c322806f264493e567f0fb779c4e","kubernetes.io/config.hash":"b0e6c322806f264493e567f0fb779c4e","kubernetes.io/config.seen":"2025-12-02T16:07:51.422587689Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I1202 16:08:04.101538  472154 cri.go:126] list returned 4 containers
	I1202 16:08:04.101561  472154 cri.go:129] container: {ID:498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b Status:running}
	I1202 16:08:04.101605  472154 cri.go:135] skipping {498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b running}: state = "running", want "paused"
	I1202 16:08:04.101622  472154 cri.go:129] container: {ID:855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07 Status:running}
	I1202 16:08:04.101630  472154 cri.go:135] skipping {855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07 running}: state = "running", want "paused"
	I1202 16:08:04.101638  472154 cri.go:129] container: {ID:903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710 Status:running}
	I1202 16:08:04.101647  472154 cri.go:135] skipping {903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710 running}: state = "running", want "paused"
	I1202 16:08:04.101661  472154 cri.go:129] container: {ID:b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8 Status:running}
	I1202 16:08:04.101672  472154 cri.go:135] skipping {b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8 running}: state = "running", want "paused"
	I1202 16:08:04.101728  472154 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:08:04.112158  472154 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:08:04.112182  472154 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:08:04.112231  472154 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:08:04.122569  472154 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:08:04.123111  472154 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-136818" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:08:04.123347  472154 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-264555/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-136818" cluster setting kubeconfig missing "running-upgrade-136818" context setting]
	I1202 16:08:04.123818  472154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:04.181776  472154 kapi.go:59] client config for running-upgrade-136818: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/running-upgrade-136818/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/running-upgrade-136818/client.key", CAFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 16:08:04.182189  472154 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 16:08:04.182204  472154 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 16:08:04.182209  472154 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 16:08:04.182213  472154 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 16:08:04.182217  472154 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 16:08:04.182736  472154 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:08:04.195187  472154 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 16:07:47.631970009 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 16:08:03.365177853 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
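The drift check above is a plain `diff -u` between the kubeadm config already on the node and the freshly rendered one; any difference (here, the dropped etcd proxy-refresh-interval extraArg) makes minikube reconfigure the cluster from the new file, as the log states. The same check by hand, with the paths from the log:

    # Exit status 1 (non-empty diff) means the rendered kubeadm config drifted since the last start.
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || echo "drift: will reconfigure from kubeadm.yaml.new"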
	I1202 16:08:04.195212  472154 kubeadm.go:1161] stopping kube-system containers ...
	I1202 16:08:04.195228  472154 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 16:08:04.195285  472154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:08:04.234505  472154 cri.go:89] found id: "903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710"
	I1202 16:08:04.234531  472154 cri.go:89] found id: "b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8"
	I1202 16:08:04.234537  472154 cri.go:89] found id: "498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b"
	I1202 16:08:04.234541  472154 cri.go:89] found id: "855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07"
	I1202 16:08:04.234549  472154 cri.go:89] found id: ""
	I1202 16:08:04.234556  472154 cri.go:252] Stopping containers: [903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710 b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8 498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b 855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07]
	I1202 16:08:04.234620  472154 ssh_runner.go:195] Run: which crictl
	I1202 16:08:04.238597  472154 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710 b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8 498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b 855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07
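Stopping the existing control-plane containers before the restart is done straight through crictl, with the same kube-system label filter that produced the four IDs above. A condensed sketch of those two commands:

    # List every kube-system container, then stop them with the 10s grace period used in the log.
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    sudo crictl stop --timeout=10 ${ids}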
	I1202 16:08:08.424935  472164 cli_runner.go:164] Run: docker network inspect missing-upgrade-881462 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:08:08.447239  472164 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1202 16:08:08.451700  472164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:08:08.464689  472164 kubeadm.go:883] updating cluster {Name:missing-upgrade-881462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-881462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:08:08.464827  472164 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1202 16:08:08.464887  472164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:08:08.556304  472164 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:08:08.556321  472164 crio.go:433] Images already preloaded, skipping extraction
	I1202 16:08:08.556383  472164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:08:08.595970  472164 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:08:08.595985  472164 cache_images.go:84] Images are preloaded, skipping loading
	I1202 16:08:08.595993  472164 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1202 16:08:08.596104  472164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=missing-upgrade-881462 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-881462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:08:08.596171  472164 ssh_runner.go:195] Run: crio config
	I1202 16:08:08.644903  472164 cni.go:84] Creating CNI manager for ""
	I1202 16:08:08.644915  472164 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:08:08.644924  472164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 16:08:08.644944  472164 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-881462 NodeName:missing-upgrade-881462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:08:08.645083  472164 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-881462"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 16:08:08.645139  472164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1202 16:08:08.655937  472164 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 16:08:08.656009  472164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:08:08.665704  472164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1202 16:08:08.686546  472164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 16:08:08.710808  472164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1202 16:08:08.731488  472164 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:08:08.735703  472164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:08:08.748326  472164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:08.815213  472164 ssh_runner.go:195] Run: sudo systemctl start kubelet
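With the unit file and the 10-kubeadm.conf drop-in copied a few lines above, daemon-reload plus start brings up the kubelet with the overridden ExecStart shown earlier. A quick sketch for confirming the drop-in took effect, assuming only standard systemd tooling inside the node container:

    # Shows kubelet.service together with minikube's drop-in (the ExecStart override), then its state.
    sudo systemctl cat kubelet
    sudo systemctl is-active kubelet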
	I1202 16:08:08.842610  472164 certs.go:68] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462 for IP: 192.168.76.2
	I1202 16:08:08.842629  472164 certs.go:194] generating shared ca certs ...
	I1202 16:08:08.842651  472164 certs.go:226] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:08.842821  472164 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:08:08.842874  472164 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:08:08.842883  472164 certs.go:256] generating profile certs ...
	I1202 16:08:08.842956  472164 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/client.key
	I1202 16:08:08.842979  472164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/client.crt with IP's: []
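The "minikube-user" client certificate above is generated in Go and signed by the shared minikubeCA key pair; as a mental model only, the rough OpenSSL equivalent looks like the sketch below. The filenames and the O=system:masters group are illustrative assumptions, not taken from the log:

    # Hypothetical OpenSSL equivalent of the logged client-cert generation (names are placeholders).
    openssl req -new -newkey rsa:2048 -nodes \
        -subj "/CN=minikube-user/O=system:masters" \
        -keyout client.key -out client.csr
    openssl x509 -req -in client.csr -days 365 \
        -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt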
	I1202 16:08:08.084585  474687 cli_runner.go:164] Run: docker network inspect pause-907557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:08:08.110043  474687 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 16:08:08.116234  474687 kubeadm.go:884] updating cluster {Name:pause-907557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-907557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:08:08.116572  474687 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:08:08.116638  474687 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:08:08.154766  474687 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:08:08.154789  474687 crio.go:433] Images already preloaded, skipping extraction
	I1202 16:08:08.154847  474687 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:08:08.185342  474687 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:08:08.185364  474687 cache_images.go:86] Images are preloaded, skipping loading
	I1202 16:08:08.185373  474687 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1202 16:08:08.185533  474687 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-907557 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-907557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:08:08.185630  474687 ssh_runner.go:195] Run: crio config
	I1202 16:08:08.238539  474687 cni.go:84] Creating CNI manager for ""
	I1202 16:08:08.238561  474687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:08:08.238575  474687 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 16:08:08.238597  474687 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-907557 NodeName:pause-907557 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:08:08.238726  474687 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-907557"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
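	A config of this shape can be sanity-checked before kubeadm consumes it. The sketch below is illustrative and not part of this run; it assumes a kubeadm release recent enough to ship the "config validate" subcommand, and reuses the path the log later shows minikube writing (/var/tmp/minikube/kubeadm.yaml.new):

	    # Validate the generated kubeadm config against the API versions it declares.
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new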
	
	I1202 16:08:08.238795  474687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 16:08:08.248934  474687 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 16:08:08.248991  474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:08:08.257721  474687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1202 16:08:08.273331  474687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 16:08:08.289452  474687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1202 16:08:08.304194  474687 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:08:08.309148  474687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:08.428243  474687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:08:08.445729  474687 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557 for IP: 192.168.85.2
	I1202 16:08:08.445750  474687 certs.go:195] generating shared ca certs ...
	I1202 16:08:08.445771  474687 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:08.445933  474687 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:08:08.445992  474687 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:08:08.446008  474687 certs.go:257] generating profile certs ...
	I1202 16:08:08.446122  474687 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/client.key
	I1202 16:08:08.446191  474687 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/apiserver.key.7fab9d41
	I1202 16:08:08.446244  474687 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/proxy-client.key
	I1202 16:08:08.446382  474687 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:08:08.446453  474687 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:08:08.446468  474687 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:08:08.446508  474687 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:08:08.446551  474687 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:08:08.446590  474687 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:08:08.446664  474687 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:08:08.447352  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:08:08.468682  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:08:08.490407  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:08:08.510792  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:08:08.529893  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 16:08:08.549638  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 16:08:08.569673  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:08:08.590413  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 16:08:08.611782  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:08:08.632273  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:08:08.652925  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:08:08.672220  474687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:08:08.687006  474687 ssh_runner.go:195] Run: openssl version
	I1202 16:08:08.693871  474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:08:08.702892  474687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:08:08.707221  474687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:08:08.707289  474687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:08:08.744793  474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:08:08.753835  474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:08:08.763880  474687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:08.768860  474687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:08.768930  474687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:08.808826  474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:08:08.817883  474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:08:08.826971  474687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:08:08.831227  474687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:08:08.831288  474687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:08:08.878371  474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
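	The hash-and-symlink sequence above follows the OpenSSL CA-directory convention: "openssl x509 -hash -noout" prints the certificate's subject-name hash, and a symlink named <hash>.0 under /etc/ssl/certs lets OpenSSL locate the CA by hash at verification time (the suffix increments to .1, .2, ... when hashes collide). A minimal standalone sketch of the same idea, with paths taken from the log but not executed as written here:

	    # Compute the subject hash and create the lookup symlink OpenSSL expects.
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"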
	I1202 16:08:08.889105  474687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:08:08.894003  474687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:08:08.930584  474687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:08:08.967913  474687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:08:09.005637  474687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:08:09.042326  474687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:08:09.078094  474687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
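	The -checkend 86400 probes above ask OpenSSL whether each control-plane certificate remains valid for at least another 86400 seconds (24 hours): the command exits 0 if so and non-zero otherwise, which is how a caller can decide whether regeneration is needed. A standalone sketch with an illustrative path:

	    # Exit status answers: will this certificate still be valid in 24 hours?
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	        echo "certificate valid for at least another day"
	    else
	        echo "certificate expires within 24h (or is already expired)"
	    fi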
	I1202 16:08:09.117708  474687 kubeadm.go:401] StartCluster: {Name:pause-907557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-907557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:08:09.117856  474687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:08:09.117916  474687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:08:09.149043  474687 cri.go:89] found id: "b2836aceeb8807e0993320e05f6aa6c4be7c30aaaa190092f8e98f5f7dd646ec"
	I1202 16:08:09.149084  474687 cri.go:89] found id: "34828dad597db079c97a036969df0740139e6fd38885ad5627968129aef7c2b3"
	I1202 16:08:09.149093  474687 cri.go:89] found id: "586f014c53211c1af9d8288055382380c3d51998056d288238f813c46118b641"
	I1202 16:08:09.149101  474687 cri.go:89] found id: "1ac7ddf9843eebd770bec15da5164025aa9877f89ae53a56ffdd6e14a093fe56"
	I1202 16:08:09.149108  474687 cri.go:89] found id: "7cc002479c3d20848066c689b18ebdf1db75e87f1c451b1526e550789e7a63fa"
	I1202 16:08:09.149114  474687 cri.go:89] found id: "cdfe7eda529156977893291247b97065289958fe65cbac19931af954d1f7e904"
	I1202 16:08:09.149120  474687 cri.go:89] found id: "132312565fa9df9459ca2fab422a4a035d2dd56ac519dec4d9ca9c4397bc628b"
	I1202 16:08:09.149123  474687 cri.go:89] found id: ""
	I1202 16:08:09.149180  474687 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 16:08:09.163029  474687 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:08:09Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:08:09.163100  474687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:08:09.171509  474687 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:08:09.171533  474687 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:08:09.171581  474687 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:08:09.180968  474687 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:08:09.182066  474687 kubeconfig.go:125] found "pause-907557" server: "https://192.168.85.2:8443"
	I1202 16:08:09.183465  474687 kapi.go:59] client config for pause-907557: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/client.key", CAFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 16:08:09.184033  474687 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 16:08:09.184052  474687 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 16:08:09.184059  474687 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 16:08:09.184065  474687 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 16:08:09.184070  474687 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 16:08:09.184480  474687 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:08:09.197588  474687 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 16:08:09.197630  474687 kubeadm.go:602] duration metric: took 26.090944ms to restartPrimaryControlPlane
	I1202 16:08:09.197640  474687 kubeadm.go:403] duration metric: took 79.945856ms to StartCluster
	I1202 16:08:09.197658  474687 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.197742  474687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:08:09.198754  474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.199011  474687 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:08:09.199122  474687 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 16:08:09.199248  474687 config.go:182] Loaded profile config "pause-907557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:08:09.201711  474687 out.go:179] * Enabled addons: 
	I1202 16:08:09.201720  474687 out.go:179] * Verifying Kubernetes components...
	I1202 16:08:09.202912  474687 addons.go:530] duration metric: took 3.794378ms for enable addons: enabled=[]
	I1202 16:08:09.202957  474687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:09.334964  474687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:08:09.350715  474687 node_ready.go:35] waiting up to 6m0s for node "pause-907557" to be "Ready" ...
	I1202 16:08:09.361396  474687 node_ready.go:49] node "pause-907557" is "Ready"
	I1202 16:08:09.361709  474687 node_ready.go:38] duration metric: took 10.951637ms for node "pause-907557" to be "Ready" ...
	I1202 16:08:09.361774  474687 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:08:09.361860  474687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:08:09.379150  474687 api_server.go:72] duration metric: took 180.016104ms to wait for apiserver process to appear ...
	I1202 16:08:09.379240  474687 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:08:09.379280  474687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 16:08:09.387363  474687 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1202 16:08:09.388587  474687 api_server.go:141] control plane version: v1.34.2
	I1202 16:08:09.388619  474687 api_server.go:131] duration metric: took 9.359692ms to wait for apiserver health ...
	I1202 16:08:09.388630  474687 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:08:09.391723  474687 system_pods.go:59] 7 kube-system pods found
	I1202 16:08:09.391753  474687 system_pods.go:61] "coredns-66bc5c9577-ckjzv" [41952b1f-3ef9-414d-99f6-b4d638903867] Running
	I1202 16:08:09.391760  474687 system_pods.go:61] "etcd-pause-907557" [321b3b9b-6fd8-4e31-affc-aa795a64994b] Running
	I1202 16:08:09.391764  474687 system_pods.go:61] "kindnet-svk5r" [6a32f68e-4724-4380-8045-ca504c4294c9] Running
	I1202 16:08:09.391769  474687 system_pods.go:61] "kube-apiserver-pause-907557" [fb99da0c-34e5-4b60-bdcb-5211eb9bf260] Running
	I1202 16:08:09.391774  474687 system_pods.go:61] "kube-controller-manager-pause-907557" [ae7b492e-cf03-466c-ab30-2797fdbc1202] Running
	I1202 16:08:09.391783  474687 system_pods.go:61] "kube-proxy-6wbvh" [402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7] Running
	I1202 16:08:09.391795  474687 system_pods.go:61] "kube-scheduler-pause-907557" [49ed68b4-af67-402e-8473-87079a43e9b0] Running
	I1202 16:08:09.391802  474687 system_pods.go:74] duration metric: took 3.165489ms to wait for pod list to return data ...
	I1202 16:08:09.391817  474687 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:08:09.393791  474687 default_sa.go:45] found service account: "default"
	I1202 16:08:09.393815  474687 default_sa.go:55] duration metric: took 1.989216ms for default service account to be created ...
	I1202 16:08:09.393825  474687 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:08:09.396312  474687 system_pods.go:86] 7 kube-system pods found
	I1202 16:08:09.396335  474687 system_pods.go:89] "coredns-66bc5c9577-ckjzv" [41952b1f-3ef9-414d-99f6-b4d638903867] Running
	I1202 16:08:09.396341  474687 system_pods.go:89] "etcd-pause-907557" [321b3b9b-6fd8-4e31-affc-aa795a64994b] Running
	I1202 16:08:09.396344  474687 system_pods.go:89] "kindnet-svk5r" [6a32f68e-4724-4380-8045-ca504c4294c9] Running
	I1202 16:08:09.396348  474687 system_pods.go:89] "kube-apiserver-pause-907557" [fb99da0c-34e5-4b60-bdcb-5211eb9bf260] Running
	I1202 16:08:09.396351  474687 system_pods.go:89] "kube-controller-manager-pause-907557" [ae7b492e-cf03-466c-ab30-2797fdbc1202] Running
	I1202 16:08:09.396355  474687 system_pods.go:89] "kube-proxy-6wbvh" [402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7] Running
	I1202 16:08:09.396358  474687 system_pods.go:89] "kube-scheduler-pause-907557" [49ed68b4-af67-402e-8473-87079a43e9b0] Running
	I1202 16:08:09.396363  474687 system_pods.go:126] duration metric: took 2.53312ms to wait for k8s-apps to be running ...
	I1202 16:08:09.396369  474687 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:08:09.396413  474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:08:09.411891  474687 system_svc.go:56] duration metric: took 15.508976ms WaitForService to wait for kubelet
	I1202 16:08:09.411931  474687 kubeadm.go:587] duration metric: took 212.882843ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:08:09.411961  474687 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:08:09.415055  474687 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:08:09.415086  474687 node_conditions.go:123] node cpu capacity is 8
	I1202 16:08:09.415113  474687 node_conditions.go:105] duration metric: took 3.144417ms to run NodePressure ...
	I1202 16:08:09.415131  474687 start.go:242] waiting for startup goroutines ...
	I1202 16:08:09.415142  474687 start.go:247] waiting for cluster config update ...
	I1202 16:08:09.415156  474687 start.go:256] writing updated cluster config ...
	I1202 16:08:09.415536  474687 ssh_runner.go:195] Run: rm -f paused
	I1202 16:08:09.419407  474687 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:08:09.420022  474687 kapi.go:59] client config for pause-907557: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/client.key", CAFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 16:08:09.422854  474687 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ckjzv" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.426921  474687 pod_ready.go:94] pod "coredns-66bc5c9577-ckjzv" is "Ready"
	I1202 16:08:09.426955  474687 pod_ready.go:86] duration metric: took 4.077844ms for pod "coredns-66bc5c9577-ckjzv" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.429146  474687 pod_ready.go:83] waiting for pod "etcd-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.432844  474687 pod_ready.go:94] pod "etcd-pause-907557" is "Ready"
	I1202 16:08:09.432867  474687 pod_ready.go:86] duration metric: took 3.697806ms for pod "etcd-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.434754  474687 pod_ready.go:83] waiting for pod "kube-apiserver-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.438207  474687 pod_ready.go:94] pod "kube-apiserver-pause-907557" is "Ready"
	I1202 16:08:09.438229  474687 pod_ready.go:86] duration metric: took 3.451466ms for pod "kube-apiserver-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.440160  474687 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.064399  472164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/client.crt ...
	I1202 16:08:09.064432  472164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/client.crt: {Name:mke2d670641a9d4bc809de9f6a3fdd72fd1842f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.064652  472164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/client.key ...
	I1202 16:08:09.064666  472164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/client.key: {Name:mkabe585a0e7e4028b3beefc9e9bc1c4b31bc7af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.064764  472164 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.key.f7728d12
	I1202 16:08:09.064776  472164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.crt.f7728d12 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1202 16:08:09.333221  472164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.crt.f7728d12 ...
	I1202 16:08:09.333240  472164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.crt.f7728d12: {Name:mk47f5ee4254c7b1fe9ef36b30cab9d3b7a75ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.333460  472164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.key.f7728d12 ...
	I1202 16:08:09.333480  472164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.key.f7728d12: {Name:mkd67552487d29c5b368464c7f77c283d785a645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.333606  472164 certs.go:381] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.crt.f7728d12 -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.crt
	I1202 16:08:09.333733  472164 certs.go:385] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.key.f7728d12 -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.key
	I1202 16:08:09.333825  472164 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.key
	I1202 16:08:09.333840  472164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.crt with IP's: []
	I1202 16:08:09.432175  472164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.crt ...
	I1202 16:08:09.432207  472164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.crt: {Name:mk3ba9ddcf274888a21306742942b9f32fae4cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.432397  472164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.key ...
	I1202 16:08:09.432408  472164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.key: {Name:mke7b071c2844210a0f644e5ce0b8222208bdae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.432672  472164 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:08:09.432724  472164 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:08:09.432733  472164 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:08:09.432758  472164 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:08:09.432779  472164 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:08:09.432797  472164 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:08:09.432831  472164 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:08:09.433458  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:08:09.461489  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:08:09.489607  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:08:09.515584  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:08:09.541143  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 16:08:09.567627  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 16:08:09.593852  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:08:09.620453  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 16:08:09.647590  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:08:09.677139  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:08:09.703481  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:08:09.729671  472164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:08:09.749727  472164 ssh_runner.go:195] Run: openssl version
	I1202 16:08:09.756557  472164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:08:09.767855  472164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:09.772035  472164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:09.772085  472164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:09.780186  472164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:08:09.793296  472164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:08:09.804723  472164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:08:09.809249  472164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:08:09.809309  472164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:08:09.817447  472164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:08:09.830177  472164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:08:09.843761  472164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:08:09.848051  472164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:08:09.848117  472164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:08:09.856124  472164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:08:09.870676  472164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:08:09.874962  472164 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 16:08:09.875019  472164 kubeadm.go:392] StartCluster: {Name:missing-upgrade-881462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-881462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:08:09.875115  472164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:08:09.875169  472164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:08:09.918559  472164 cri.go:89] found id: ""
	I1202 16:08:09.918624  472164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:08:09.933321  472164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 16:08:09.944227  472164 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1202 16:08:09.944287  472164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 16:08:09.954972  472164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 16:08:09.954985  472164 kubeadm.go:157] found existing configuration files:
	
	I1202 16:08:09.955033  472164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 16:08:09.967009  472164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 16:08:09.967052  472164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 16:08:09.977493  472164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 16:08:09.988047  472164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 16:08:09.988103  472164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 16:08:09.997753  472164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 16:08:10.008366  472164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 16:08:10.008443  472164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 16:08:10.018324  472164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 16:08:10.029518  472164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 16:08:10.029574  472164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 16:08:10.039576  472164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 16:08:10.085990  472164 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1202 16:08:10.086084  472164 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 16:08:10.107476  472164 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1202 16:08:10.107590  472164 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 16:08:10.107635  472164 kubeadm.go:310] OS: Linux
	I1202 16:08:10.107775  472164 kubeadm.go:310] CGROUPS_CPU: enabled
	I1202 16:08:10.107838  472164 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1202 16:08:10.107904  472164 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1202 16:08:10.108013  472164 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1202 16:08:10.108087  472164 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1202 16:08:10.108163  472164 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1202 16:08:10.108237  472164 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1202 16:08:10.108304  472164 kubeadm.go:310] CGROUPS_IO: enabled
	I1202 16:08:10.177067  472164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 16:08:10.177214  472164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 16:08:10.177364  472164 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
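	The kubeadm init invocation above suppresses a long list of checks via --ignore-preflight-errors, yet kubeadm still prints the system-verification summary seen here. To run just those checks against the same config without attempting a full init, the preflight phase can be invoked on its own; illustrative, not executed in this run:

	    # Run only kubeadm's preflight checks for the generated config.
	    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml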
	I1202 16:08:10.186403  472164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 16:08:07.012064  473081 cli_runner.go:164] Run: docker network inspect stopped-upgrade-937293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:08:07.032127  473081 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 16:08:07.036635  473081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:08:07.050647  473081 kubeadm.go:884] updating cluster {Name:stopped-upgrade-937293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-937293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:08:07.050776  473081 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1202 16:08:07.050832  473081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:08:07.107196  473081 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:08:07.107226  473081 crio.go:433] Images already preloaded, skipping extraction
	I1202 16:08:07.107292  473081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:08:07.146500  473081 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:08:07.146526  473081 cache_images.go:86] Images are preloaded, skipping loading
	I1202 16:08:07.146536  473081 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.32.0 crio true true} ...
	I1202 16:08:07.146663  473081 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-937293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-937293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:08:07.146792  473081 ssh_runner.go:195] Run: crio config
	I1202 16:08:07.195524  473081 cni.go:84] Creating CNI manager for ""
	I1202 16:08:07.195546  473081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:08:07.195567  473081 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 16:08:07.195598  473081 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-937293 NodeName:stopped-upgrade-937293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:08:07.195753  473081 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-937293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
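The block above is the kubeadm config minikube renders for this profile before writing it to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of how such a fragment can be produced, here is a minimal Go text/template sketch; the template text and field names are illustrative and are not minikube's actual template or data structure.

// Illustrative only: not minikube's real template or data structure.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	data := struct {
		AdvertiseAddress, CRISocket, NodeName string
		APIServerPort                         int
	}{"192.168.103.2", "unix:///var/run/crio/crio.sock", "stopped-upgrade-937293", 8443}

	// Render the fragment to stdout; minikube writes the full config to
	// /var/tmp/minikube/kubeadm.yaml.new and copies it over SSH.
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}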
	
	I1202 16:08:07.195826  473081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1202 16:08:07.206090  473081 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 16:08:07.206165  473081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:08:07.215812  473081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1202 16:08:07.240879  473081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 16:08:07.259781  473081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 16:08:07.280525  473081 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:08:07.284809  473081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
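The bash one-liner above replaces any stale control-plane.minikube.internal entry in /etc/hosts with the current node IP. A minimal Go sketch of the same ensure-hosts-entry step, assuming the path and IP shown in the log (writing /etc/hosts still requires root):

// Sketch: ensure a single "<ip>\tcontrol-plane.minikube.internal" line in
// /etc/hosts, mirroring the grep/echo/cp one-liner in the log. Writing
// /etc/hosts requires root; the path and IP below come from the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale entry for the control-plane name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	err := ensureHostsEntry("/etc/hosts", "192.168.103.2", "control-plane.minikube.internal")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}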
	I1202 16:08:07.297256  473081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:07.374154  473081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:08:07.394335  473081 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293 for IP: 192.168.103.2
	I1202 16:08:07.394355  473081 certs.go:195] generating shared ca certs ...
	I1202 16:08:07.394377  473081 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:07.394641  473081 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:08:07.394702  473081 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:08:07.394717  473081 certs.go:257] generating profile certs ...
	I1202 16:08:07.394882  473081 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/client.key
	I1202 16:08:07.394976  473081 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/apiserver.key.083656e0
	I1202 16:08:07.395030  473081 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/proxy-client.key
	I1202 16:08:07.395175  473081 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:08:07.395220  473081 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:08:07.395233  473081 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:08:07.395269  473081 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:08:07.395305  473081 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:08:07.395339  473081 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:08:07.395399  473081 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:08:07.396168  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:08:07.426258  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:08:07.463510  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:08:07.504377  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:08:07.532886  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 16:08:07.565526  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 16:08:07.593591  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:08:07.623126  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 16:08:07.659804  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:08:07.691915  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:08:07.717909  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:08:07.749606  473081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:08:07.771662  473081 ssh_runner.go:195] Run: openssl version
	I1202 16:08:07.777784  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:08:07.789885  473081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:08:07.794230  473081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:08:07.794286  473081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:08:07.802324  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:08:07.813053  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:08:07.823680  473081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:08:07.828127  473081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:08:07.828204  473081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:08:07.836527  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:08:07.848505  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:08:07.860247  473081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:07.863893  473081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:07.863955  473081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:07.871317  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
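After hashing each CA with "openssl x509 -hash", the commands above link the PEM into /etc/ssl/certs under "<hash>.0" so OpenSSL-based clients pick it up. A small sketch of just the symlink step; the hash value is copied from the log for minikubeCA.pem rather than computed:

// Sketch: link a CA PEM into /etc/ssl/certs under "<subject-hash>.0",
// as the "test -L ... || ln -fs ..." commands above do. The hash value
// is the one openssl printed for minikubeCA.pem in this run.
package main

import (
	"fmt"
	"os"
)

func ensureHashLink(pemPath, hash string) error {
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // link (or file) already present, nothing to do
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := ensureHashLink("/usr/share/ca-certificates/minikubeCA.pem", "b5213941"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}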
	I1202 16:08:07.882120  473081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:08:07.886271  473081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:08:07.894832  473081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:08:07.902829  473081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:08:07.911331  473081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:08:07.920231  473081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:08:07.929163  473081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
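The "openssl x509 -checkend 86400" calls above ask whether each control-plane certificate expires within the next 24 hours. The same check can be expressed with Go's crypto/x509; the certificate path below is one of those from the log:

// Sketch: a Go-native equivalent of "openssl x509 -checkend 86400",
// reporting whether a certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}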
	I1202 16:08:07.936643  473081 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-937293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-937293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:08:07.936753  473081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:08:07.936814  473081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:08:07.984920  473081 cri.go:89] found id: ""
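The crictl query above lists kube-system container IDs; the empty result (found id: "") means there is nothing to stop before the restart. A sketch of the same query via os/exec, assuming crictl is on PATH and the caller can reach the CRI socket:

// Sketch: the same "crictl ps" label query via os/exec. Assumes crictl is
// on PATH and the caller may talk to the CRI socket (the log runs it with
// sudo over SSH).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; empty output means none.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}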
	I1202 16:08:07.984996  473081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:08:07.997625  473081 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:08:07.997655  473081 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:08:07.997713  473081 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:08:08.011908  473081 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:08:08.012809  473081 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-937293" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:08:08.013316  473081 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-264555/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-937293" cluster setting kubeconfig missing "stopped-upgrade-937293" context setting]
	I1202 16:08:08.014051  473081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
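The kubeconfig.go lines above notice that the "stopped-upgrade-937293" cluster and context are missing from the kubeconfig and repair the file. A sketch of the detection half using client-go's clientcmd loader (this assumes the k8s.io/client-go module; the profile name and kubeconfig path are taken from the log):

// Sketch: detect the missing cluster/context entries the log reports,
// using client-go's kubeconfig loader. Requires k8s.io/client-go.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func missingEntries(kubeconfig, name string) ([]string, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return nil, err
	}
	var missing []string
	if _, ok := cfg.Clusters[name]; !ok {
		missing = append(missing, "cluster")
	}
	if _, ok := cfg.Contexts[name]; !ok {
		missing = append(missing, "context")
	}
	return missing, nil
}

func main() {
	missing, err := missingEntries(
		"/home/jenkins/minikube-integration/22021-264555/kubeconfig",
		"stopped-upgrade-937293")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Before the repair this prints [cluster context], matching the log.
	fmt.Println("missing:", missing)
}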
	I1202 16:08:08.015017  473081 kapi.go:59] client config for stopped-upgrade-937293: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/client.key", CAFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 16:08:08.015590  473081 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 16:08:08.015623  473081 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 16:08:08.015631  473081 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 16:08:08.015638  473081 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 16:08:08.015645  473081 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 16:08:08.016085  473081 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:08:08.030122  473081 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 16:07:47.360949203 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 16:08:07.276478124 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
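The diff above is how kubeadm config drift is detected: the deployed /var/tmp/minikube/kubeadm.yaml still carries the etcd "proxy-refresh-interval" extraArg, while the regenerated file does not, so the cluster is reconfigured. A byte-for-byte comparison is enough to reach the same yes/no decision (minikube itself shells out to diff -u so it can log the difference):

// Sketch: the reconfigure decision reduces to "has the rendered config
// changed on disk?". Paths are the ones from the log.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func configDrifted(current, proposed string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return false, err
	}
	b, err := os.ReadFile(proposed)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("reconfigure needed:", drifted)
}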
	I1202 16:08:08.030146  473081 kubeadm.go:1161] stopping kube-system containers ...
	I1202 16:08:08.030164  473081 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 16:08:08.030228  473081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:08:08.098762  473081 cri.go:89] found id: ""
	I1202 16:08:08.098852  473081 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 16:08:08.128146  473081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 16:08:08.138778  473081 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5647 Dec  2 16:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Dec  2 16:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec  2 16:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Dec  2 16:07 /etc/kubernetes/scheduler.conf
	
	I1202 16:08:08.138846  473081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 16:08:08.151756  473081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 16:08:08.162187  473081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 16:08:08.172103  473081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:08:08.172169  473081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 16:08:08.183811  473081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 16:08:08.195655  473081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:08:08.195727  473081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 16:08:08.206765  473081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 16:08:08.217387  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 16:08:08.269097  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 16:08:09.096332  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 16:08:09.284164  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 16:08:09.350270  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
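The five ssh_runner calls above re-run individual "kubeadm init" phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config. A sketch of driving the same phase sequence locally with os/exec, using the binary and config paths from the log; minikube runs these over SSH inside the node container:

// Sketch: replay the same kubeadm init phases in order, stopping on the
// first failure. Binary and config paths are taken from the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func runPhases() error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.32.0/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", phase, err)
		}
	}
	return nil
}

func main() {
	if err := runPhases(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}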
	I1202 16:08:09.415031  473081 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:08:09.415108  473081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:08:09.915653  473081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:08:09.932070  473081 api_server.go:72] duration metric: took 517.042838ms to wait for apiserver process to appear ...
	I1202 16:08:09.932112  473081 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:08:09.932143  473081 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 16:08:09.932577  473081 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
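The api_server.go lines above poll https://192.168.103.2:8443/healthz until the restarted apiserver answers; the first attempt fails with "connection refused". A simplified poller is sketched below; TLS verification is skipped here only to keep the example short, whereas minikube verifies against the cluster CA:

// Sketch: poll the apiserver healthz endpoint until it answers 200 or the
// deadline passes. InsecureSkipVerify is for brevity only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is serving /healthz
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence in the log
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}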
	I1202 16:08:09.822995  474687 pod_ready.go:94] pod "kube-controller-manager-pause-907557" is "Ready"
	I1202 16:08:09.823028  474687 pod_ready.go:86] duration metric: took 382.842274ms for pod "kube-controller-manager-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:10.024162  474687 pod_ready.go:83] waiting for pod "kube-proxy-6wbvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:10.423292  474687 pod_ready.go:94] pod "kube-proxy-6wbvh" is "Ready"
	I1202 16:08:10.423327  474687 pod_ready.go:86] duration metric: took 399.132522ms for pod "kube-proxy-6wbvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:10.623539  474687 pod_ready.go:83] waiting for pod "kube-scheduler-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:11.024109  474687 pod_ready.go:94] pod "kube-scheduler-pause-907557" is "Ready"
	I1202 16:08:11.024144  474687 pod_ready.go:86] duration metric: took 400.575047ms for pod "kube-scheduler-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:11.024160  474687 pod_ready.go:40] duration metric: took 1.604689785s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:08:11.071445  474687 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 16:08:11.073469  474687 out.go:179] * Done! kubectl is now configured to use "pause-907557" cluster and "default" namespace by default
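The pod_ready.go lines above wait for each kube-system control-plane pod to report the Ready condition before minikube prints "Done!". A sketch of that per-pod check with client-go (assumes the k8s.io/client-go module; the pod name is one of those in the log):

// Sketch: check a pod's Ready condition the way the pod_ready waits do.
// Requires k8s.io/client-go; pod name and namespace come from the log.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(cs, "kube-system", "kube-proxy-6wbvh")
	fmt.Println("ready:", ready, "err:", err)
}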
	I1202 16:08:10.189736  472164 out.go:235]   - Generating certificates and keys ...
	I1202 16:08:10.189853  472164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 16:08:10.189932  472164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 16:08:10.423463  472164 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 16:08:10.729085  472164 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 16:08:10.883225  472164 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 16:08:10.990299  472164 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 16:08:11.196234  472164 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 16:08:11.196414  472164 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost missing-upgrade-881462] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1202 16:08:11.645956  472164 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 16:08:11.646118  472164 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost missing-upgrade-881462] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1202 16:08:11.882746  472164 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 16:08:12.066274  472164 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 16:08:12.141907  472164 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 16:08:12.141981  472164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 16:08:12.405575  472164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 16:08:12.460147  472164 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 16:08:12.649175  472164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 16:08:12.850735  472164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 16:08:13.177114  472164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 16:08:13.177589  472164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 16:08:13.184016  472164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.838793512Z" level=info msg="RDT not available in the host system"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.838810675Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.839905355Z" level=info msg="Conmon does support the --sync option"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.839928024Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.839943603Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.840879476Z" level=info msg="Conmon does support the --sync option"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.840911047Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.845712296Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.84574664Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.846247229Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.846655803Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.846713204Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.936863296Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-ckjzv Namespace:kube-system ID:06a3d7f4b28636a25a1eb656a0ab0e933cbc9ee70416d384116e714d7bd2795c UID:41952b1f-3ef9-414d-99f6-b4d638903867 NetNS:/var/run/netns/0a7ca600-bea9-4791-a9f3-75ac408ef58e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00060c228}] Aliases:map[]}"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937130667Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-ckjzv for CNI network kindnet (type=ptp)"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937657613Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937692287Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.93775267Z" level=info msg="Create NRI interface"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937908743Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937929019Z" level=info msg="runtime interface created"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.93794427Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.93795277Z" level=info msg="runtime interface starting up..."
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937960535Z" level=info msg="starting plugins..."
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937977079Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.93834808Z" level=info msg="No systemd watchdog enabled"
	Dec 02 16:08:07 pause-907557 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b2836aceeb880       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   06a3d7f4b2863       coredns-66bc5c9577-ckjzv               kube-system
	34828dad597db       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   9c7200f739ca6       kindnet-svk5r                          kube-system
	586f014c53211       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   23 seconds ago      Running             kube-proxy                0                   430408110ff33       kube-proxy-6wbvh                       kube-system
	1ac7ddf9843ee       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   36 seconds ago      Running             kube-apiserver            0                   06a0703c6431c       kube-apiserver-pause-907557            kube-system
	7cc002479c3d2       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   36 seconds ago      Running             kube-scheduler            0                   d92e374692b74       kube-scheduler-pause-907557            kube-system
	cdfe7eda52915       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   36 seconds ago      Running             kube-controller-manager   0                   1e816873b5622       kube-controller-manager-pause-907557   kube-system
	132312565fa9d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   36 seconds ago      Running             etcd                      0                   1ae45c58e8234       etcd-pause-907557                      kube-system
	
	
	==> coredns [b2836aceeb8807e0993320e05f6aa6c4be7c30aaaa190092f8e98f5f7dd646ec] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56667 - 20312 "HINFO IN 8430757461962317108.6158922499630476662. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019053882s
	
	
	==> describe nodes <==
	Name:               pause-907557
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-907557
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=pause-907557
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_07_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:07:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-907557
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:08:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:08:00 +0000   Tue, 02 Dec 2025 16:07:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:08:00 +0000   Tue, 02 Dec 2025 16:07:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:08:00 +0000   Tue, 02 Dec 2025 16:07:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:08:00 +0000   Tue, 02 Dec 2025 16:08:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-907557
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                39f3c90c-c1ff-4f22-b289-732142ace055
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-ckjzv                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-907557                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-svk5r                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-907557             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-907557    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-6wbvh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-907557             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node pause-907557 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node pause-907557 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node pause-907557 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node pause-907557 event: Registered Node pause-907557 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-907557 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 9b c8 59 55 e7 08 06
	[  +4.389247] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 07 ad 09 99 ea 08 06
	[Dec 2 15:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.025203] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023929] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 15:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023866] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023913] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +2.047808] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +4.031697] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +8.511329] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[ +16.382712] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 15:19] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	
	
	==> etcd [132312565fa9df9459ca2fab422a4a035d2dd56ac519dec4d9ca9c4397bc628b] <==
	{"level":"warn","ts":"2025-12-02T16:07:41.445874Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T16:07:41.126020Z","time spent":"319.845456ms","remote":"127.0.0.1:39114","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":27,"request content":"key:\"/registry/clusterroles/system:aggregate-to-edit\" limit:1 "}
	{"level":"info","ts":"2025-12-02T16:07:41.445898Z","caller":"traceutil/trace.go:172","msg":"trace[1198633256] transaction","detail":"{read_only:false; response_revision:44; number_of_response:1; }","duration":"321.975483ms","start":"2025-12-02T16:07:41.123915Z","end":"2025-12-02T16:07:41.445891Z","steps":["trace[1198633256] 'process raft request'  (duration: 114.352245ms)","trace[1198633256] 'compare'  (duration: 207.456527ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T16:07:41.445926Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T16:07:41.123901Z","time spent":"322.011522ms","remote":"127.0.0.1:39264","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":705,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/prioritylevelconfigurations/global-default\" mod_revision:0 > success:<request_put:<key:\"/registry/prioritylevelconfigurations/global-default\" value_size:645 >> failure:<>"}
	{"level":"info","ts":"2025-12-02T16:07:41.573794Z","caller":"traceutil/trace.go:172","msg":"trace[306217900] linearizableReadLoop","detail":"{readStateIndex:48; appliedIndex:48; }","duration":"124.282405ms","start":"2025-12-02T16:07:41.449487Z","end":"2025-12-02T16:07:41.573770Z","steps":["trace[306217900] 'read index received'  (duration: 124.27533ms)","trace[306217900] 'applied index is now lower than readState.Index'  (duration: 6.058µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T16:07:41.824192Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"374.681379ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T16:07:41.824263Z","caller":"traceutil/trace.go:172","msg":"trace[1003424393] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:0; response_revision:44; }","duration":"374.766397ms","start":"2025-12-02T16:07:41.449483Z","end":"2025-12-02T16:07:41.824250Z","steps":["trace[1003424393] 'agreement among raft nodes before linearized reading'  (duration: 124.36671ms)","trace[1003424393] 'range keys from in-memory index tree'  (duration: 250.284002ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T16:07:41.824337Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T16:07:41.449471Z","time spent":"374.816886ms","remote":"127.0.0.1:39114","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":27,"request content":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 "}
	{"level":"warn","ts":"2025-12-02T16:07:41.824322Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"250.455934ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597456650294978 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/flowschemas/system-nodes\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/system-nodes\" value_size:595 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-02T16:07:41.824444Z","caller":"traceutil/trace.go:172","msg":"trace[1766645835] transaction","detail":"{read_only:false; response_revision:45; number_of_response:1; }","duration":"375.709417ms","start":"2025-12-02T16:07:41.448706Z","end":"2025-12-02T16:07:41.824416Z","steps":["trace[1766645835] 'process raft request'  (duration: 125.11328ms)","trace[1766645835] 'compare'  (duration: 250.350123ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T16:07:41.824496Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T16:07:41.448689Z","time spent":"375.781217ms","remote":"127.0.0.1:39252","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":637,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/system-nodes\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/system-nodes\" value_size:595 >> failure:<>"}
	{"level":"info","ts":"2025-12-02T16:07:41.949345Z","caller":"traceutil/trace.go:172","msg":"trace[88265753] linearizableReadLoop","detail":"{readStateIndex:49; appliedIndex:49; }","duration":"121.014024ms","start":"2025-12-02T16:07:41.828308Z","end":"2025-12-02T16:07:41.949322Z","steps":["trace[88265753] 'read index received'  (duration: 121.005988ms)","trace[88265753] 'applied index is now lower than readState.Index'  (duration: 6.301µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T16:07:41.950551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.22124ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:discovery\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T16:07:41.950604Z","caller":"traceutil/trace.go:172","msg":"trace[77218913] range","detail":"{range_begin:/registry/clusterrolebindings/system:discovery; range_end:; response_count:0; response_revision:45; }","duration":"122.289235ms","start":"2025-12-02T16:07:41.828303Z","end":"2025-12-02T16:07:41.950593Z","steps":["trace[77218913] 'agreement among raft nodes before linearized reading'  (duration: 121.104007ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:07:41.951318Z","caller":"traceutil/trace.go:172","msg":"trace[1054494770] transaction","detail":"{read_only:false; response_revision:47; number_of_response:1; }","duration":"122.519287ms","start":"2025-12-02T16:07:41.828786Z","end":"2025-12-02T16:07:41.951305Z","steps":["trace[1054494770] 'process raft request'  (duration: 122.452511ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:07:41.951324Z","caller":"traceutil/trace.go:172","msg":"trace[1931375124] transaction","detail":"{read_only:false; response_revision:46; number_of_response:1; }","duration":"124.301632ms","start":"2025-12-02T16:07:41.827006Z","end":"2025-12-02T16:07:41.951307Z","steps":["trace[1931375124] 'process raft request'  (duration: 122.395167ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:07:42.054939Z","caller":"traceutil/trace.go:172","msg":"trace[1800630730] transaction","detail":"{read_only:false; response_revision:48; number_of_response:1; }","duration":"100.590416ms","start":"2025-12-02T16:07:41.954324Z","end":"2025-12-02T16:07:42.054915Z","steps":["trace[1800630730] 'process raft request'  (duration: 96.760514ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:07:42.055041Z","caller":"traceutil/trace.go:172","msg":"trace[637785066] transaction","detail":"{read_only:false; response_revision:49; number_of_response:1; }","duration":"100.020236ms","start":"2025-12-02T16:07:41.954971Z","end":"2025-12-02T16:07:42.054991Z","steps":["trace[637785066] 'process raft request'  (duration: 99.886118ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:08:01.845136Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.222977ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T16:08:01.845232Z","caller":"traceutil/trace.go:172","msg":"trace[1314016532] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:387; }","duration":"157.324948ms","start":"2025-12-02T16:08:01.687888Z","end":"2025-12-02T16:08:01.845213Z","steps":["trace[1314016532] 'range keys from in-memory index tree'  (duration: 157.143333ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:08:01.845156Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.411725ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T16:08:01.845410Z","caller":"traceutil/trace.go:172","msg":"trace[280292306] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:387; }","duration":"178.635845ms","start":"2025-12-02T16:08:01.666722Z","end":"2025-12-02T16:08:01.845358Z","steps":["trace[280292306] 'range keys from in-memory index tree'  (duration: 178.359962ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:08:01.863554Z","caller":"traceutil/trace.go:172","msg":"trace[30696123] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"133.299309ms","start":"2025-12-02T16:08:01.730235Z","end":"2025-12-02T16:08:01.863534Z","steps":["trace[30696123] 'process raft request'  (duration: 133.074487ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:08:02.368918Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.620457ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T16:08:02.368979Z","caller":"traceutil/trace.go:172","msg":"trace[661681382] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:390; }","duration":"148.691856ms","start":"2025-12-02T16:08:02.220274Z","end":"2025-12-02T16:08:02.368966Z","steps":["trace[661681382] 'range keys from in-memory index tree'  (duration: 148.53513ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:08:03.725557Z","caller":"traceutil/trace.go:172","msg":"trace[374387627] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"121.076042ms","start":"2025-12-02T16:08:03.604455Z","end":"2025-12-02T16:08:03.725531Z","steps":["trace[374387627] 'process raft request'  (duration: 120.889836ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:08:14 up  2:50,  0 user,  load average: 4.04, 1.80, 1.31
	Linux pause-907557 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [34828dad597db079c97a036969df0740139e6fd38885ad5627968129aef7c2b3] <==
	I1202 16:07:50.263994       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 16:07:50.357878       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 16:07:50.358066       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:07:50.358088       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:07:50.358117       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:07:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:07:50.560051       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:07:50.560099       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:07:50.560111       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:07:50.560231       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 16:07:50.957840       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:07:50.957890       1 metrics.go:72] Registering metrics
	I1202 16:07:50.958047       1 controller.go:711] "Syncing nftables rules"
	I1202 16:08:00.563524       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:08:00.563624       1 main.go:301] handling current node
	I1202 16:08:10.566532       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:08:10.566563       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1ac7ddf9843eebd770bec15da5164025aa9877f89ae53a56ffdd6e14a093fe56] <==
	I1202 16:07:39.750102       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:07:39.750363       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1202 16:07:39.750633       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1202 16:07:39.750779       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1202 16:07:39.750960       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:07:39.853576       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:07:39.854635       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 16:07:40.262882       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:07:41.113218       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1202 16:07:41.123845       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1202 16:07:41.123870       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 16:07:42.647766       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:07:42.703824       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:07:42.860514       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 16:07:42.868695       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1202 16:07:42.870283       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 16:07:42.877222       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:07:43.587622       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:07:43.758990       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:07:43.773771       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 16:07:43.782804       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 16:07:49.296851       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:07:49.301367       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:07:49.643896       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 16:07:49.689739       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [cdfe7eda529156977893291247b97065289958fe65cbac19931af954d1f7e904] <==
	I1202 16:07:48.585946       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 16:07:48.585959       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 16:07:48.586174       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 16:07:48.586329       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 16:07:48.586477       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 16:07:48.586630       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 16:07:48.586714       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-907557"
	I1202 16:07:48.586761       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1202 16:07:48.587267       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 16:07:48.587355       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 16:07:48.588182       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 16:07:48.588259       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 16:07:48.588345       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 16:07:48.588407       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 16:07:48.588649       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 16:07:48.588688       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1202 16:07:48.589393       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1202 16:07:48.589402       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 16:07:48.589725       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 16:07:48.590665       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 16:07:48.594794       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 16:07:48.599527       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 16:07:48.605939       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1202 16:07:48.608641       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 16:08:03.727062       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [586f014c53211c1af9d8288055382380c3d51998056d288238f813c46118b641] <==
	I1202 16:07:50.127455       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:07:50.189364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 16:07:50.289787       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 16:07:50.289831       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 16:07:50.289962       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:07:50.310959       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:07:50.311006       1 server_linux.go:132] "Using iptables Proxier"
	I1202 16:07:50.317487       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:07:50.318071       1 server.go:527] "Version info" version="v1.34.2"
	I1202 16:07:50.320086       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:07:50.322190       1 config.go:200] "Starting service config controller"
	I1202 16:07:50.327638       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:07:50.322641       1 config.go:309] "Starting node config controller"
	I1202 16:07:50.327683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:07:50.327689       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:07:50.326112       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:07:50.327699       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:07:50.326098       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:07:50.327706       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:07:50.428744       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 16:07:50.428785       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 16:07:50.430007       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7cc002479c3d20848066c689b18ebdf1db75e87f1c451b1526e550789e7a63fa] <==
	E1202 16:07:39.625871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 16:07:39.625871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 16:07:39.625950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 16:07:39.626006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 16:07:40.460590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 16:07:40.485977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 16:07:40.486726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 16:07:40.550589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 16:07:40.629566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 16:07:40.728461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 16:07:40.742845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 16:07:40.765319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 16:07:40.775965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 16:07:40.820409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 16:07:40.843069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 16:07:40.850504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 16:07:40.976306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 16:07:41.122251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 16:07:41.147646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 16:07:41.152712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 16:07:41.153480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 16:07:41.215170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 16:07:41.227783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 16:07:42.431320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1202 16:07:43.620165       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 16:07:44 pause-907557 kubelet[1341]: E1202 16:07:44.667350    1341 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-907557\" already exists" pod="kube-system/kube-apiserver-pause-907557"
	Dec 02 16:07:44 pause-907557 kubelet[1341]: I1202 16:07:44.720439    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-907557" podStartSLOduration=1.7203934140000001 podStartE2EDuration="1.720393414s" podCreationTimestamp="2025-12-02 16:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:07:44.70863584 +0000 UTC m=+1.178923066" watchObservedRunningTime="2025-12-02 16:07:44.720393414 +0000 UTC m=+1.190680641"
	Dec 02 16:07:44 pause-907557 kubelet[1341]: I1202 16:07:44.737610    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-907557" podStartSLOduration=1.7375809800000002 podStartE2EDuration="1.73758098s" podCreationTimestamp="2025-12-02 16:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:07:44.72059503 +0000 UTC m=+1.190882274" watchObservedRunningTime="2025-12-02 16:07:44.73758098 +0000 UTC m=+1.207868200"
	Dec 02 16:07:44 pause-907557 kubelet[1341]: I1202 16:07:44.737820    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-907557" podStartSLOduration=1.737805699 podStartE2EDuration="1.737805699s" podCreationTimestamp="2025-12-02 16:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:07:44.737755284 +0000 UTC m=+1.208042506" watchObservedRunningTime="2025-12-02 16:07:44.737805699 +0000 UTC m=+1.208092931"
	Dec 02 16:07:44 pause-907557 kubelet[1341]: I1202 16:07:44.747709    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-907557" podStartSLOduration=1.7476860950000002 podStartE2EDuration="1.747686095s" podCreationTimestamp="2025-12-02 16:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:07:44.747660507 +0000 UTC m=+1.217947715" watchObservedRunningTime="2025-12-02 16:07:44.747686095 +0000 UTC m=+1.217973325"
	Dec 02 16:07:48 pause-907557 kubelet[1341]: I1202 16:07:48.572786    1341 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 02 16:07:48 pause-907557 kubelet[1341]: I1202 16:07:48.573572    1341 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754450    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c5vd\" (UniqueName: \"kubernetes.io/projected/6a32f68e-4724-4380-8045-ca504c4294c9-kube-api-access-4c5vd\") pod \"kindnet-svk5r\" (UID: \"6a32f68e-4724-4380-8045-ca504c4294c9\") " pod="kube-system/kindnet-svk5r"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754505    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7-kube-proxy\") pod \"kube-proxy-6wbvh\" (UID: \"402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7\") " pod="kube-system/kube-proxy-6wbvh"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754539    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds54c\" (UniqueName: \"kubernetes.io/projected/402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7-kube-api-access-ds54c\") pod \"kube-proxy-6wbvh\" (UID: \"402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7\") " pod="kube-system/kube-proxy-6wbvh"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754562    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6a32f68e-4724-4380-8045-ca504c4294c9-cni-cfg\") pod \"kindnet-svk5r\" (UID: \"6a32f68e-4724-4380-8045-ca504c4294c9\") " pod="kube-system/kindnet-svk5r"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754657    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a32f68e-4724-4380-8045-ca504c4294c9-xtables-lock\") pod \"kindnet-svk5r\" (UID: \"6a32f68e-4724-4380-8045-ca504c4294c9\") " pod="kube-system/kindnet-svk5r"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754712    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7-xtables-lock\") pod \"kube-proxy-6wbvh\" (UID: \"402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7\") " pod="kube-system/kube-proxy-6wbvh"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754743    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7-lib-modules\") pod \"kube-proxy-6wbvh\" (UID: \"402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7\") " pod="kube-system/kube-proxy-6wbvh"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754765    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a32f68e-4724-4380-8045-ca504c4294c9-lib-modules\") pod \"kindnet-svk5r\" (UID: \"6a32f68e-4724-4380-8045-ca504c4294c9\") " pod="kube-system/kindnet-svk5r"
	Dec 02 16:07:50 pause-907557 kubelet[1341]: I1202 16:07:50.695658    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6wbvh" podStartSLOduration=1.6956353370000001 podStartE2EDuration="1.695635337s" podCreationTimestamp="2025-12-02 16:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:07:50.695507547 +0000 UTC m=+7.165794774" watchObservedRunningTime="2025-12-02 16:07:50.695635337 +0000 UTC m=+7.165922564"
	Dec 02 16:07:50 pause-907557 kubelet[1341]: I1202 16:07:50.695795    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-svk5r" podStartSLOduration=1.695783519 podStartE2EDuration="1.695783519s" podCreationTimestamp="2025-12-02 16:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:07:50.685283997 +0000 UTC m=+7.155571224" watchObservedRunningTime="2025-12-02 16:07:50.695783519 +0000 UTC m=+7.166070746"
	Dec 02 16:08:00 pause-907557 kubelet[1341]: I1202 16:08:00.987625    1341 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 02 16:08:01 pause-907557 kubelet[1341]: I1202 16:08:01.140562    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41952b1f-3ef9-414d-99f6-b4d638903867-config-volume\") pod \"coredns-66bc5c9577-ckjzv\" (UID: \"41952b1f-3ef9-414d-99f6-b4d638903867\") " pod="kube-system/coredns-66bc5c9577-ckjzv"
	Dec 02 16:08:01 pause-907557 kubelet[1341]: I1202 16:08:01.140604    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flkb5\" (UniqueName: \"kubernetes.io/projected/41952b1f-3ef9-414d-99f6-b4d638903867-kube-api-access-flkb5\") pod \"coredns-66bc5c9577-ckjzv\" (UID: \"41952b1f-3ef9-414d-99f6-b4d638903867\") " pod="kube-system/coredns-66bc5c9577-ckjzv"
	Dec 02 16:08:02 pause-907557 kubelet[1341]: I1202 16:08:02.811291    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ckjzv" podStartSLOduration=13.811258253 podStartE2EDuration="13.811258253s" podCreationTimestamp="2025-12-02 16:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:08:02.80881092 +0000 UTC m=+19.279098147" watchObservedRunningTime="2025-12-02 16:08:02.811258253 +0000 UTC m=+19.281545483"
	Dec 02 16:08:11 pause-907557 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 16:08:11 pause-907557 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 16:08:11 pause-907557 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 16:08:11 pause-907557 systemd[1]: kubelet.service: Consumed 1.299s CPU time.
	

-- /stdout --
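The etcd entries in the log dump above warn that several apply requests exceeded etcd's 100ms expected duration ("apply request took too long"). A minimal, hypothetical Go sketch for filtering such structured etcd log lines; the field names (level, msg, took) and the 100ms threshold are taken from the warnings shown above, and the program is not part of the test suite.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"time"
)

// entry mirrors only the etcd log fields used by this filter.
type entry struct {
	Level string `json:"level"`
	Msg   string `json:"msg"`
	Took  string `json:"took"` // e.g. "148.620457ms"
}

func main() {
	const expected = 100 * time.Millisecond
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e entry
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip non-JSON lines in the mixed dump
		}
		if e.Msg != "apply request took too long" {
			continue
		}
		if d, err := time.ParseDuration(e.Took); err == nil && d > expected {
			fmt.Printf("slow apply: %s (expected <= %s)\n", d, expected)
		}
	}
}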
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-907557 -n pause-907557
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-907557 -n pause-907557: exit status 2 (442.83448ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-907557 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
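The kubectl invocation above uses --field-selector=status.phase!=Running to list pods that are not in the Running phase across all namespaces. A hedged client-go sketch of the same query, assuming a kubeconfig at the default location with the pause-907557 context current; it is an illustration, not code taken from helpers_test.go.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same filter as the kubectl command: pods whose phase is not Running.
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}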
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-907557
helpers_test.go:243: (dbg) docker inspect pause-907557:

-- stdout --
	[
	    {
	        "Id": "1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93",
	        "Created": "2025-12-02T16:07:13.118033261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 462078,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:07:13.231842186Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93/hostname",
	        "HostsPath": "/var/lib/docker/containers/1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93/hosts",
	        "LogPath": "/var/lib/docker/containers/1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93/1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93-json.log",
	        "Name": "/pause-907557",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-907557:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-907557",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1703ec85b899598dbc5fc149aa069267a53515122a6a3a2b021ec0e6ad44fd93",
	                "LowerDir": "/var/lib/docker/overlay2/d02d7a352a775308f0914038d3d1a1bcb04fea5d36d1d76375f924ef3a2c24df-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d02d7a352a775308f0914038d3d1a1bcb04fea5d36d1d76375f924ef3a2c24df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d02d7a352a775308f0914038d3d1a1bcb04fea5d36d1d76375f924ef3a2c24df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d02d7a352a775308f0914038d3d1a1bcb04fea5d36d1d76375f924ef3a2c24df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-907557",
	                "Source": "/var/lib/docker/volumes/pause-907557/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-907557",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-907557",
	                "name.minikube.sigs.k8s.io": "pause-907557",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a888f19e369313d7cdccded30acab63611faab6ac0522d47662f5acccd4248b0",
	            "SandboxKey": "/var/run/docker/netns/a888f19e3693",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-907557": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dbdc5e8fde6f66d7813af3b29cbabf22efadef370f7024cd569312c85aaf9c38",
	                    "EndpointID": "23c2ac36d67c52dfaed52ae4376d1165509445931bfc4a85c7017f4ad7d597fd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "36:ce:7f:19:57:0b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-907557",
	                        "1703ec85b899"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
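The docker inspect output above shows State.Running=true and State.Paused=false for the pause-907557 node container; minikube pause acts on the Kubernetes containers inside the node (via cri-o in this job), so the outer container itself is normally left unpaused. A minimal sketch, assuming the Docker Go SDK (github.com/docker/docker/client), for reading the same State fields programmatically:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// Talk to the local Docker daemon using the standard environment settings.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Container name as reported by docker inspect above.
	info, err := cli.ContainerInspect(context.Background(), "pause-907557")
	if err != nil {
		panic(err)
	}
	fmt.Printf("status=%s running=%v paused=%v\n",
		info.State.Status, info.State.Running, info.State.Paused)
}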
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-907557 -n pause-907557
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-907557 -n pause-907557: exit status 2 (452.508372ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-907557 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-907557 logs -n 25: (1.165058264s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-259576 --memory=3072 --driver=docker  --container-runtime=crio                                  │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │ 02 Dec 25 16:05 UTC │
	│ stop    │ -p scheduled-stop-259576 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --cancel-scheduled                                                                       │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:05 UTC │ 02 Dec 25 16:05 UTC │
	│ stop    │ -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:06 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:06 UTC │                     │
	│ stop    │ -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:06 UTC │ 02 Dec 25 16:06 UTC │
	│ delete  │ -p scheduled-stop-259576                                                                                          │ scheduled-stop-259576       │ jenkins │ v1.37.0 │ 02 Dec 25 16:06 UTC │ 02 Dec 25 16:06 UTC │
	│ start   │ -p insufficient-storage-319725 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio  │ insufficient-storage-319725 │ jenkins │ v1.37.0 │ 02 Dec 25 16:06 UTC │                     │
	│ delete  │ -p insufficient-storage-319725                                                                                    │ insufficient-storage-319725 │ jenkins │ v1.37.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:07 UTC │
	│ start   │ -p pause-907557 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio         │ pause-907557                │ jenkins │ v1.37.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:08 UTC │
	│ start   │ -p offline-crio-893562 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ offline-crio-893562         │ jenkins │ v1.37.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:07 UTC │
	│ start   │ -p running-upgrade-136818 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ running-upgrade-136818      │ jenkins │ v1.35.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:07 UTC │
	│ start   │ -p stopped-upgrade-937293 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ stopped-upgrade-937293      │ jenkins │ v1.35.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:07 UTC │
	│ delete  │ -p offline-crio-893562                                                                                            │ offline-crio-893562         │ jenkins │ v1.37.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:07 UTC │
	│ stop    │ stopped-upgrade-937293 stop                                                                                       │ stopped-upgrade-937293      │ jenkins │ v1.35.0 │ 02 Dec 25 16:07 UTC │ 02 Dec 25 16:08 UTC │
	│ start   │ -p running-upgrade-136818 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ running-upgrade-136818      │ jenkins │ v1.37.0 │ 02 Dec 25 16:07 UTC │                     │
	│ start   │ -p missing-upgrade-881462 --memory=3072 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-881462      │ jenkins │ v1.35.0 │ 02 Dec 25 16:07 UTC │                     │
	│ start   │ -p stopped-upgrade-937293 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ stopped-upgrade-937293      │ jenkins │ v1.37.0 │ 02 Dec 25 16:08 UTC │                     │
	│ start   │ -p pause-907557 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ pause-907557                │ jenkins │ v1.37.0 │ 02 Dec 25 16:08 UTC │ 02 Dec 25 16:08 UTC │
	│ pause   │ -p pause-907557 --alsologtostderr -v=5                                                                            │ pause-907557                │ jenkins │ v1.37.0 │ 02 Dec 25 16:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:08:04
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:08:04.660286  474687 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:08:04.660602  474687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:08:04.660617  474687 out.go:374] Setting ErrFile to fd 2...
	I1202 16:08:04.660622  474687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:08:04.660939  474687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:08:04.661417  474687 out.go:368] Setting JSON to false
	I1202 16:08:04.662981  474687 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10226,"bootTime":1764681459,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:08:04.663094  474687 start.go:143] virtualization: kvm guest
	I1202 16:08:04.665241  474687 out.go:179] * [pause-907557] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:08:04.667410  474687 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:08:04.667460  474687 notify.go:221] Checking for updates...
	I1202 16:08:04.670019  474687 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:08:04.673302  474687 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:08:04.674597  474687 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:08:04.675744  474687 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:08:04.680254  474687 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:08:04.682115  474687 config.go:182] Loaded profile config "pause-907557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:08:04.682972  474687 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:08:04.711666  474687 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:08:04.711798  474687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:08:04.800599  474687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-02 16:08:04.786635445 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:08:04.801221  474687 docker.go:319] overlay module found
	I1202 16:08:04.805254  474687 out.go:179] * Using the docker driver based on existing profile
	I1202 16:08:04.806522  474687 start.go:309] selected driver: docker
	I1202 16:08:04.806545  474687 start.go:927] validating driver "docker" against &{Name:pause-907557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-907557 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:08:04.806755  474687 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:08:04.806888  474687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:08:04.903441  474687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:84 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-02 16:08:04.889586367 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:08:04.904454  474687 cni.go:84] Creating CNI manager for ""
	I1202 16:08:04.904539  474687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:08:04.904622  474687 start.go:353] cluster config:
	{Name:pause-907557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-907557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:08:04.908854  474687 out.go:179] * Starting "pause-907557" primary control-plane node in "pause-907557" cluster
	I1202 16:08:04.910224  474687 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:08:04.911994  474687 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:08:04.913813  474687 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:08:04.913856  474687 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 16:08:04.913868  474687 cache.go:65] Caching tarball of preloaded images
	I1202 16:08:04.913854  474687 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:08:04.913988  474687 preload.go:238] Found /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 16:08:04.913999  474687 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 16:08:04.914174  474687 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/config.json ...
	I1202 16:08:04.942384  474687 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:08:04.942488  474687 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 16:08:04.942535  474687 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:08:04.942612  474687 start.go:360] acquireMachinesLock for pause-907557: {Name:mkcf3bb036c9115abf66275504f1edf44ef5f737 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:08:04.942782  474687 start.go:364] duration metric: took 56.275µs to acquireMachinesLock for "pause-907557"
	I1202 16:08:04.942838  474687 start.go:96] Skipping create...Using existing machine configuration
	I1202 16:08:04.942848  474687 fix.go:54] fixHost starting: 
	I1202 16:08:04.943151  474687 cli_runner.go:164] Run: docker container inspect pause-907557 --format={{.State.Status}}
	I1202 16:08:04.979618  474687 fix.go:112] recreateIfNeeded on pause-907557: state=Running err=<nil>
	W1202 16:08:04.979644  474687 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 16:08:00.637166  473081 out.go:252] * Restarting existing docker container for "stopped-upgrade-937293" ...
	I1202 16:08:00.637296  473081 cli_runner.go:164] Run: docker start stopped-upgrade-937293
	I1202 16:08:00.970562  473081 cli_runner.go:164] Run: docker container inspect stopped-upgrade-937293 --format={{.State.Status}}
	I1202 16:08:00.995085  473081 kic.go:430] container "stopped-upgrade-937293" state is running.
	I1202 16:08:00.996079  473081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-937293
	I1202 16:08:01.023639  473081 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/config.json ...
	I1202 16:08:01.023946  473081 machine.go:94] provisionDockerMachine start ...
	I1202 16:08:01.024030  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:01.051546  473081 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:01.051905  473081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1202 16:08:01.051922  473081 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:08:01.052746  473081 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33716->127.0.0.1:33119: read: connection reset by peer
	I1202 16:08:04.187005  473081 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-937293
	
	I1202 16:08:04.187045  473081 ubuntu.go:182] provisioning hostname "stopped-upgrade-937293"
	I1202 16:08:04.187121  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:04.207830  473081 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:04.208158  473081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1202 16:08:04.208181  473081 main.go:143] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-937293 && echo "stopped-upgrade-937293" | sudo tee /etc/hostname
	I1202 16:08:04.357518  473081 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-937293
	
	I1202 16:08:04.357630  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:04.379822  473081 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:04.380122  473081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1202 16:08:04.380149  473081 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-937293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-937293/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-937293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:08:04.516702  473081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:08:04.516735  473081 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:08:04.516765  473081 ubuntu.go:190] setting up certificates
	I1202 16:08:04.516778  473081 provision.go:84] configureAuth start
	I1202 16:08:04.516843  473081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-937293
	I1202 16:08:04.540913  473081 provision.go:143] copyHostCerts
	I1202 16:08:04.540981  473081 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:08:04.540994  473081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:08:04.541076  473081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:08:04.541193  473081 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:08:04.541200  473081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:08:04.541241  473081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:08:04.541319  473081 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:08:04.541501  473081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:08:04.541581  473081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:08:04.541721  473081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-937293 san=[127.0.0.1 192.168.103.2 localhost minikube stopped-upgrade-937293]
	I1202 16:08:04.675802  473081 provision.go:177] copyRemoteCerts
	I1202 16:08:04.675872  473081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:08:04.675949  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:04.699013  473081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/stopped-upgrade-937293/id_rsa Username:docker}
	I1202 16:08:04.807550  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1202 16:08:04.864066  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:08:04.906701  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:08:04.946022  473081 provision.go:87] duration metric: took 429.226837ms to configureAuth
	I1202 16:08:04.946051  473081 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:08:04.946250  473081 config.go:182] Loaded profile config "stopped-upgrade-937293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1202 16:08:04.946388  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:04.979036  473081 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:04.979516  473081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1202 16:08:04.979539  473081 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:08:05.377267  473081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:08:05.377295  473081 machine.go:97] duration metric: took 4.353329264s to provisionDockerMachine
	I1202 16:08:05.377309  473081 start.go:293] postStartSetup for "stopped-upgrade-937293" (driver="docker")
	I1202 16:08:05.377322  473081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:08:05.377384  473081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:08:05.377442  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:05.402515  473081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/stopped-upgrade-937293/id_rsa Username:docker}
	I1202 16:08:05.510037  473081 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:08:05.515043  473081 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:08:05.515095  473081 main.go:143] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1202 16:08:05.515106  473081 main.go:143] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1202 16:08:05.515123  473081 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1202 16:08:05.515142  473081 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:08:05.515207  473081 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:08:05.515306  473081 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:08:05.515452  473081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:08:05.528147  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:08:05.556820  473081 start.go:296] duration metric: took 179.496111ms for postStartSetup
	I1202 16:08:05.556905  473081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:08:05.556942  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:05.582202  473081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/stopped-upgrade-937293/id_rsa Username:docker}
	I1202 16:08:05.681104  473081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:08:05.688213  473081 fix.go:56] duration metric: took 5.079350199s for fixHost
	I1202 16:08:05.688241  473081 start.go:83] releasing machines lock for "stopped-upgrade-937293", held for 5.079404594s
	I1202 16:08:05.688309  473081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-937293
	I1202 16:08:05.712241  473081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:08:05.712310  473081 ssh_runner.go:195] Run: cat /version.json
	I1202 16:08:05.712336  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:05.712361  473081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-937293
	I1202 16:08:05.736195  473081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/stopped-upgrade-937293/id_rsa Username:docker}
	I1202 16:08:05.738318  473081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/stopped-upgrade-937293/id_rsa Username:docker}
	W1202 16:08:05.915849  473081 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1202 16:08:05.915948  473081 ssh_runner.go:195] Run: systemctl --version
	I1202 16:08:05.921646  473081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:08:06.072409  473081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 16:08:06.079884  473081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:08:06.093718  473081 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1202 16:08:06.093803  473081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:08:06.107788  473081 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 16:08:06.107884  473081 start.go:496] detecting cgroup driver to use...
	I1202 16:08:06.107925  473081 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:08:06.107990  473081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:08:06.123647  473081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:08:06.139271  473081 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:08:06.139344  473081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:08:06.154995  473081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:08:06.169454  473081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:08:06.247760  473081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:08:06.328557  473081 docker.go:234] disabling docker service ...
	I1202 16:08:06.328637  473081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:08:06.342836  473081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:08:06.355678  473081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:08:06.428680  473081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:08:06.517340  473081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:08:06.532107  473081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:08:06.556936  473081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 16:08:06.556984  473081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.572569  473081 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:08:06.572628  473081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.585906  473081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.599941  473081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.613022  473081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:08:06.625470  473081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.638163  473081 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.649894  473081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:06.661374  473081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:08:06.671280  473081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:08:06.681481  473081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:06.757943  473081 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 16:08:06.872338  473081 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:08:06.872418  473081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:08:06.876829  473081 start.go:564] Will wait 60s for crictl version
	I1202 16:08:06.876895  473081 ssh_runner.go:195] Run: which crictl
	I1202 16:08:06.881154  473081 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 16:08:06.922061  473081 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1202 16:08:06.922145  473081 ssh_runner.go:195] Run: crio --version
	I1202 16:08:06.970124  473081 ssh_runner.go:195] Run: crio --version
	I1202 16:08:07.010620  473081 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
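	For readability: the CRI-O reconfiguration logged above for stopped-upgrade-937293 boils down to writing one crictl config file and patching a few keys in the CRI-O drop-in. A minimal sketch of the resulting guest files, assuming the stock paths targeted by the tee and sed commands (the surrounding contents of 02-crio.conf are not shown in the log):

	    # /etc/crictl.yaml (written via tee)
	    runtime-endpoint: unix:///var/run/crio/crio.sock

	    # /etc/crio/crio.conf.d/02-crio.conf (keys touched by the sed edits)
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]

	The same sequence repeats for pause-907557 and missing-upgrade-881462 below; the only substantive difference is the pause image tag (registry.k8s.io/pause:3.10.1 for the v1.34.2 profile).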
	I1202 16:08:04.982941  474687 out.go:252] * Updating the running docker "pause-907557" container ...
	I1202 16:08:04.982988  474687 machine.go:94] provisionDockerMachine start ...
	I1202 16:08:04.983071  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:05.008214  474687 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:05.008693  474687 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1202 16:08:05.008714  474687 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:08:05.189663  474687 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-907557
	
	I1202 16:08:05.189699  474687 ubuntu.go:182] provisioning hostname "pause-907557"
	I1202 16:08:05.189865  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:05.221835  474687 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:05.222220  474687 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1202 16:08:05.222286  474687 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-907557 && echo "pause-907557" | sudo tee /etc/hostname
	I1202 16:08:05.410252  474687 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-907557
	
	I1202 16:08:05.410332  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:05.435758  474687 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:05.436408  474687 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1202 16:08:05.436448  474687 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-907557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-907557/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-907557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:08:05.602825  474687 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:08:05.602867  474687 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:08:05.602901  474687 ubuntu.go:190] setting up certificates
	I1202 16:08:05.602913  474687 provision.go:84] configureAuth start
	I1202 16:08:05.602977  474687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-907557
	I1202 16:08:05.627502  474687 provision.go:143] copyHostCerts
	I1202 16:08:05.627569  474687 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:08:05.627583  474687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:08:05.627668  474687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:08:05.627792  474687 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:08:05.627803  474687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:08:05.627839  474687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:08:05.627922  474687 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:08:05.627931  474687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:08:05.627963  474687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:08:05.628035  474687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.pause-907557 san=[127.0.0.1 192.168.85.2 localhost minikube pause-907557]
	I1202 16:08:05.661184  474687 provision.go:177] copyRemoteCerts
	I1202 16:08:05.661254  474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:08:05.661297  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:05.687062  474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/pause-907557/id_rsa Username:docker}
	I1202 16:08:05.798494  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:08:05.817933  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 16:08:05.839154  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:08:05.858902  474687 provision.go:87] duration metric: took 255.972432ms to configureAuth
	I1202 16:08:05.858935  474687 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:08:05.859198  474687 config.go:182] Loaded profile config "pause-907557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:08:05.859316  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:05.880766  474687 main.go:143] libmachine: Using SSH client type: native
	I1202 16:08:05.881086  474687 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1202 16:08:05.881111  474687 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:08:06.256180  474687 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:08:06.256211  474687 machine.go:97] duration metric: took 1.273213936s to provisionDockerMachine
	I1202 16:08:06.256227  474687 start.go:293] postStartSetup for "pause-907557" (driver="docker")
	I1202 16:08:06.256243  474687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:08:06.256317  474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:08:06.256373  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:06.282408  474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/pause-907557/id_rsa Username:docker}
	I1202 16:08:06.390079  474687 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:08:06.395016  474687 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:08:06.395046  474687 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:08:06.395057  474687 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:08:06.395115  474687 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:08:06.395200  474687 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:08:06.395318  474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:08:06.403514  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:08:06.421361  474687 start.go:296] duration metric: took 165.11158ms for postStartSetup
	I1202 16:08:06.421461  474687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:08:06.421510  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:06.443969  474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/pause-907557/id_rsa Username:docker}
	I1202 16:08:06.556968  474687 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:08:06.563016  474687 fix.go:56] duration metric: took 1.62016321s for fixHost
	I1202 16:08:06.563043  474687 start.go:83] releasing machines lock for "pause-907557", held for 1.620247106s
	I1202 16:08:06.563109  474687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-907557
	I1202 16:08:06.585223  474687 ssh_runner.go:195] Run: cat /version.json
	I1202 16:08:06.585287  474687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:08:06.585304  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:06.585385  474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-907557
	I1202 16:08:06.607589  474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/pause-907557/id_rsa Username:docker}
	I1202 16:08:06.608880  474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/pause-907557/id_rsa Username:docker}
	I1202 16:08:06.775414  474687 ssh_runner.go:195] Run: systemctl --version
	I1202 16:08:06.782792  474687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:08:06.826227  474687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:08:06.831854  474687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:08:06.831930  474687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:08:06.841100  474687 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 16:08:06.841133  474687 start.go:496] detecting cgroup driver to use...
	I1202 16:08:06.841176  474687 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:08:06.841220  474687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:08:06.859500  474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:08:06.873485  474687 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:08:06.873549  474687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:08:06.890738  474687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:08:06.905557  474687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:08:07.040214  474687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:08:07.163352  474687 docker.go:234] disabling docker service ...
	I1202 16:08:07.163414  474687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:08:07.180905  474687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:08:07.195391  474687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:08:07.316560  474687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:08:07.463055  474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:08:07.481721  474687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:08:07.502642  474687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:08:07.502717  474687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.513328  474687 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:08:07.513401  474687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.524560  474687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.534291  474687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.547686  474687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:08:07.558903  474687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.570344  474687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.579417  474687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.589555  474687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:08:07.598498  474687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:08:07.608586  474687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:07.739731  474687 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 16:08:07.942971  474687 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:08:07.943045  474687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:08:07.948155  474687 start.go:564] Will wait 60s for crictl version
	I1202 16:08:07.948232  474687 ssh_runner.go:195] Run: which crictl
	I1202 16:08:07.953310  474687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:08:07.986055  474687 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 16:08:07.986144  474687 ssh_runner.go:195] Run: crio --version
	I1202 16:08:08.026605  474687 ssh_runner.go:195] Run: crio --version
	I1202 16:08:08.083009  474687 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
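	Should one want to confirm on the pause-907557 node that the edits logged above actually landed, an illustrative spot check (assuming shell access to the node, e.g. via "minikube ssh -p pause-907557") is:

	    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, per the sed commands above:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "systemd"
	    #   conmon_cgroup = "pod"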
	I1202 16:08:04.326961  472164 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-881462:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.026194366s)
	I1202 16:08:04.326993  472164 kic.go:203] duration metric: took 4.026374721s to extract preloaded images to volume ...
	W1202 16:08:04.327092  472164 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 16:08:04.327124  472164 oci.go:249] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 16:08:04.327182  472164 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 16:08:04.390144  472164 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-881462 --name missing-upgrade-881462 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-881462 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-881462 --network missing-upgrade-881462 --ip 192.168.76.2 --volume missing-upgrade-881462:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I1202 16:08:04.743168  472164 cli_runner.go:164] Run: docker container inspect missing-upgrade-881462 --format={{.State.Running}}
	I1202 16:08:04.777402  472164 cli_runner.go:164] Run: docker container inspect missing-upgrade-881462 --format={{.State.Status}}
	I1202 16:08:04.809004  472164 cli_runner.go:164] Run: docker exec missing-upgrade-881462 stat /var/lib/dpkg/alternatives/iptables
	I1202 16:08:04.891570  472164 oci.go:144] the created container "missing-upgrade-881462" has a running status.
	I1202 16:08:04.891606  472164 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa...
	I1202 16:08:05.060364  472164 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 16:08:05.095893  472164 cli_runner.go:164] Run: docker container inspect missing-upgrade-881462 --format={{.State.Status}}
	I1202 16:08:05.127538  472164 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 16:08:05.127553  472164 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-881462 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 16:08:05.204063  472164 cli_runner.go:164] Run: docker container inspect missing-upgrade-881462 --format={{.State.Status}}
	I1202 16:08:05.235185  472164 machine.go:93] provisionDockerMachine start ...
	I1202 16:08:05.235276  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:05.268280  472164 main.go:141] libmachine: Using SSH client type: native
	I1202 16:08:05.268673  472164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1202 16:08:05.268685  472164 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 16:08:05.418615  472164 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-881462
	
	I1202 16:08:05.418637  472164 ubuntu.go:169] provisioning hostname "missing-upgrade-881462"
	I1202 16:08:05.418712  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:05.443838  472164 main.go:141] libmachine: Using SSH client type: native
	I1202 16:08:05.444137  472164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1202 16:08:05.444149  472164 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-881462 && echo "missing-upgrade-881462" | sudo tee /etc/hostname
	I1202 16:08:05.607615  472164 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-881462
	
	I1202 16:08:05.607698  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:05.631992  472164 main.go:141] libmachine: Using SSH client type: native
	I1202 16:08:05.632240  472164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1202 16:08:05.632265  472164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-881462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-881462/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-881462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:08:05.774559  472164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:08:05.774583  472164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:08:05.774632  472164 ubuntu.go:177] setting up certificates
	I1202 16:08:05.774648  472164 provision.go:84] configureAuth start
	I1202 16:08:05.774743  472164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-881462
	I1202 16:08:05.798524  472164 provision.go:143] copyHostCerts
	I1202 16:08:05.798686  472164 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:08:05.798698  472164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:08:05.798888  472164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:08:05.799054  472164 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:08:05.799066  472164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:08:05.799116  472164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:08:05.799205  472164 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:08:05.799212  472164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:08:05.799247  472164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:08:05.799318  472164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-881462 san=[127.0.0.1 192.168.76.2 localhost minikube missing-upgrade-881462]
	I1202 16:08:06.057617  472164 provision.go:177] copyRemoteCerts
	I1202 16:08:06.057664  472164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:08:06.057697  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:06.083556  472164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa Username:docker}
	I1202 16:08:06.185403  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:08:06.220630  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1202 16:08:06.249591  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:08:06.284995  472164 provision.go:87] duration metric: took 510.33204ms to configureAuth
	I1202 16:08:06.285022  472164 ubuntu.go:193] setting minikube options for container-runtime
	I1202 16:08:06.285240  472164 config.go:182] Loaded profile config "missing-upgrade-881462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1202 16:08:06.285396  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:06.306649  472164 main.go:141] libmachine: Using SSH client type: native
	I1202 16:08:06.306875  472164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1202 16:08:06.306891  472164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:08:06.575405  472164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:08:06.575437  472164 machine.go:96] duration metric: took 1.340219581s to provisionDockerMachine
	I1202 16:08:06.575449  472164 client.go:171] duration metric: took 7.436626322s to LocalClient.Create
	I1202 16:08:06.575470  472164 start.go:167] duration metric: took 7.436688509s to libmachine.API.Create "missing-upgrade-881462"
	I1202 16:08:06.575476  472164 start.go:293] postStartSetup for "missing-upgrade-881462" (driver="docker")
	I1202 16:08:06.575485  472164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:08:06.575539  472164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:08:06.575575  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:06.598831  472164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa Username:docker}
	I1202 16:08:06.698199  472164 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:08:06.702307  472164 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:08:06.702346  472164 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1202 16:08:06.702353  472164 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1202 16:08:06.702358  472164 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1202 16:08:06.702370  472164 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:08:06.702469  472164 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:08:06.702566  472164 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:08:06.702708  472164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:08:06.718080  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:08:06.749529  472164 start.go:296] duration metric: took 174.036223ms for postStartSetup
	I1202 16:08:06.750012  472164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-881462
	I1202 16:08:06.770082  472164 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/config.json ...
	I1202 16:08:06.770380  472164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:08:06.770438  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:06.790968  472164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa Username:docker}
	I1202 16:08:06.883900  472164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:08:06.889016  472164 start.go:128] duration metric: took 7.754169944s to createHost
	I1202 16:08:06.889038  472164 start.go:83] releasing machines lock for "missing-upgrade-881462", held for 7.754327934s
	I1202 16:08:06.889122  472164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-881462
	I1202 16:08:06.909482  472164 ssh_runner.go:195] Run: cat /version.json
	I1202 16:08:06.909506  472164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:08:06.909529  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:06.909591  472164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-881462
	I1202 16:08:06.931103  472164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa Username:docker}
	I1202 16:08:06.932339  472164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/missing-upgrade-881462/id_rsa Username:docker}
	I1202 16:08:07.118787  472164 ssh_runner.go:195] Run: systemctl --version
	I1202 16:08:07.123895  472164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:08:07.269545  472164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 16:08:07.274280  472164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:08:07.298806  472164 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1202 16:08:07.298889  472164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:08:07.339796  472164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1202 16:08:07.339814  472164 start.go:495] detecting cgroup driver to use...
	I1202 16:08:07.339856  472164 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:08:07.339909  472164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:08:07.365626  472164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:08:07.380026  472164 docker.go:217] disabling cri-docker service (if available) ...
	I1202 16:08:07.380067  472164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:08:07.397682  472164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:08:07.416842  472164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:08:07.508273  472164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:08:07.591760  472164 docker.go:233] disabling docker service ...
	I1202 16:08:07.591824  472164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:08:07.613065  472164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:08:07.626898  472164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:08:07.710254  472164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:08:07.840612  472164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:08:07.854785  472164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:08:07.874703  472164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 16:08:07.874764  472164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.888989  472164 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:08:07.889042  472164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.900565  472164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.914406  472164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.927501  472164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:08:07.939016  472164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.951536  472164 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.977006  472164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:08:07.990798  472164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:08:08.005239  472164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:08:08.019767  472164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:08.180053  472164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 16:08:08.282078  472164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:08:08.282133  472164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:08:08.286260  472164 start.go:563] Will wait 60s for crictl version
	I1202 16:08:08.286324  472164 ssh_runner.go:195] Run: which crictl
	I1202 16:08:08.290897  472164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 16:08:08.331093  472164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1202 16:08:08.331151  472164 ssh_runner.go:195] Run: crio --version
	I1202 16:08:08.375458  472164 ssh_runner.go:195] Run: crio --version
	I1202 16:08:08.423553  472164 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
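The block above is minikube switching this node's runtime over to CRI-O: it stops and masks cri-docker and docker, writes /etc/crictl.yaml to point crictl at the CRI-O socket, patches /etc/crio/crio.conf.d/02-crio.conf for the pause image and the systemd cgroup manager, then restarts crio and waits for its socket. A minimal sketch of the same steps run by hand on the node (paths and values taken from the log above; not minikube's exact invocation):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # pin the pause image and the systemd cgroup manager in the CRI-O drop-in
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf

    # apply the change and confirm the runtime answers on its socket again
    sudo systemctl daemon-reload
    sudo systemctl restart crio
    sudo crictl version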
	I1202 16:08:03.863736  472154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:08:03.883779  472154 ssh_runner.go:195] Run: openssl version
	I1202 16:08:03.889983  472154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:08:03.900808  472154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:08:03.904758  472154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:08:03.904819  472154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:08:03.912079  472154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:08:03.922300  472154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:08:03.932880  472154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:03.936697  472154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:03.936760  472154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:03.943805  472154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:08:03.954020  472154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:08:03.964821  472154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:08:03.968715  472154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:08:03.968788  472154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:08:03.976251  472154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:08:03.986284  472154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:08:03.990280  472154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:08:03.997288  472154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:08:04.004495  472154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:08:04.011788  472154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:08:04.018538  472154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:08:04.025258  472154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
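The six openssl runs above are certificate-expiry checks: -checkend 86400 exits non-zero if the certificate expires within the next 24 hours, which is how minikube validates the existing control-plane certificates before reusing them. A sketch of the same check, assuming the /var/lib/minikube/certs layout shown in the log:

    # a non-zero exit status means the certificate expires within the next 24h
    for crt in apiserver-kubelet-client.crt apiserver-etcd-client.crt front-proxy-client.crt \
               etcd/server.crt etcd/peer.crt etcd/healthcheck-client.crt; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$crt" \
        || echo "expiring soon: $crt"
    done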
	I1202 16:08:04.039079  472154 kubeadm.go:401] StartCluster: {Name:running-upgrade-136818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-136818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:08:04.039163  472154 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:08:04.039226  472154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:08:04.081169  472154 cri.go:89] found id: "903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710"
	I1202 16:08:04.081192  472154 cri.go:89] found id: "b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8"
	I1202 16:08:04.081196  472154 cri.go:89] found id: "498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b"
	I1202 16:08:04.081199  472154 cri.go:89] found id: "855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07"
	I1202 16:08:04.081202  472154 cri.go:89] found id: ""
	I1202 16:08:04.081239  472154 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:08:04.101216  472154 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b","pid":1389,"status":"running","bundle":"/run/containers/storage/overlay-containers/498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b/userdata","rootfs":"/var/lib/containers/storage/overlay/043b20234f45aa9c720cc9109a5364484d15d9370bbb576096d5f1eed88c10f6/merged","created":"2025-12-02T16:07:52.005955863Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bf915d6a","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bf915d6a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-02T16:07:51.913973331Z","io.kubernetes.cri-o.Image":"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.32.0","io.kubernetes.cri-o.ImageRef":"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-136818\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1ba1d3cf2a4b6df642811bd2326b893f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-136818_1ba1d3cf2a4b6df642811bd2326b893f/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube
-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/043b20234f45aa9c720cc9109a5364484d15d9370bbb576096d5f1eed88c10f6/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-running-upgrade-136818_kube-system_1ba1d3cf2a4b6df642811bd2326b893f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/effba5c89ba9a0f4077811a3531e632452986c518151590b36020b61a02d32f9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"effba5c89ba9a0f4077811a3531e632452986c518151590b36020b61a02d32f9","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-running-upgrade-136818_kube-system_1ba1d3cf2a4b6df642811bd2326b893f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1ba1d3cf2a4b6df642811bd2326b893f/containers/kube-apiserver/3dcd7607\",\"read
only\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1ba1d3cf2a4b6df642811bd2326b893f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.po
d.name":"kube-apiserver-running-upgrade-136818","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1ba1d3cf2a4b6df642811bd2326b893f","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.94.2:8443","kubernetes.io/config.hash":"1ba1d3cf2a4b6df642811bd2326b893f","kubernetes.io/config.seen":"2025-12-02T16:07:51.422586711Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07","pid":1391,"status":"running","bundle":"/run/containers/storage/overlay-containers/855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07/userdata","rootfs":"/var/lib/containers/storage/overlay/9410d13aee82ad90a4483f54945e6342423e6583fb127d
17b19dada4d317936f/merged","created":"2025-12-02T16:07:52.004847589Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e68be80f","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e68be80f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-02T16:07:51.910375348Z","io.kubernetes.cri-o.Image":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","io.kubernetes.cri-o.ImageN
ame":"registry.k8s.io/etcd:3.5.16-0","io.kubernetes.cri-o.ImageRef":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-136818\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6030e2b29200be865f9696b591299ad5\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-136818_6030e2b29200be865f9696b591299ad5/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9410d13aee82ad90a4483f54945e6342423e6583fb127d17b19dada4d317936f/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-running-upgrade-136818_kube-system_6030e2b29200be865f9696b591299ad5_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/269a5e7a6f62bbdd46dfae8bf3b9f5b1e5a15ce92015411e0514d9dca4caa8b0/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"269a5e7a
6f62bbdd46dfae8bf3b9f5b1e5a15ce92015411e0514d9dca4caa8b0","io.kubernetes.cri-o.SandboxName":"k8s_etcd-running-upgrade-136818_kube-system_6030e2b29200be865f9696b591299ad5_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6030e2b29200be865f9696b591299ad5/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6030e2b29200be865f9696b591299ad5/containers/etcd/82d7a8e2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"pro
pagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-running-upgrade-136818","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6030e2b29200be865f9696b591299ad5","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.94.2:2379","kubernetes.io/config.hash":"6030e2b29200be865f9696b591299ad5","kubernetes.io/config.seen":"2025-12-02T16:07:51.422583509Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710","pid":1414,"status":"running","bundle":"/run/containers/storage/overlay-containers/903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710/userdata","rootfs":"/var/lib/containers/storage/overlay/64e3
7b1d137fddcff0afd0be893be14b859aebbee1f5199c5f079955e2ad8854/merged","created":"2025-12-02T16:07:52.015503671Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8c4b12d6","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8c4b12d6\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-02T16:07:51.933247921Z","io.kubernetes.cri-o.Image":"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7
c39874012587d233807cfc5","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.32.0","io.kubernetes.cri-o.ImageRef":"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-136818\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c6d5dc30749655fbc404edf02e486cfd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-136818_c6d5dc30749655fbc404edf02e486cfd/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/64e37b1d137fddcff0afd0be893be14b859aebbee1f5199c5f079955e2ad8854/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-running-upgrade-136818_kube-system_c6d5dc30749655fbc404edf02e486cfd_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containe
rs/92506e23b0528ddd0771accd63d18e720bcfc96ba4c05a7cb6a0ede05e6caf6d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"92506e23b0528ddd0771accd63d18e720bcfc96ba4c05a7cb6a0ede05e6caf6d","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-running-upgrade-136818_kube-system_c6d5dc30749655fbc404edf02e486cfd_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c6d5dc30749655fbc404edf02e486cfd/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c6d5dc30749655fbc404edf02e486cfd/containers/kube-scheduler/cbb3ce83\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"p
ropagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-136818","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c6d5dc30749655fbc404edf02e486cfd","kubernetes.io/config.hash":"c6d5dc30749655fbc404edf02e486cfd","kubernetes.io/config.seen":"2025-12-02T16:07:51.422588600Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8","pid":1400,"status":"running","bundle":"/run/containers/storage/overlay-containers/b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8/userdata","rootfs":"/var/lib/containers/storage/overlay/efddfd7a0dd968fc062cb9f55311672abdfaf164ec1d0056c62a923f61941d67/merged
","created":"2025-12-02T16:07:52.008235424Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"99f3a73e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"99f3a73e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-02T16:07:51.923060165Z","io.kubernetes.cri-o.Image":"8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","io.kubernetes.cri-o.ImageName":"
registry.k8s.io/kube-controller-manager:v1.32.0","io.kubernetes.cri-o.ImageRef":"8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-136818\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b0e6c322806f264493e567f0fb779c4e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-136818_b0e6c322806f264493e567f0fb779c4e/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/efddfd7a0dd968fc062cb9f55311672abdfaf164ec1d0056c62a923f61941d67/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-running-upgrade-136818_kube-system_b0e6c322806f264493e567f0fb779c4e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/ov
erlay-containers/c5b6912aa33aba924cd88ac7a7d854a164efde95bfc50a0f280f2d434e5a1fa4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c5b6912aa33aba924cd88ac7a7d854a164efde95bfc50a0f280f2d434e5a1fa4","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-running-upgrade-136818_kube-system_b0e6c322806f264493e567f0fb779c4e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b0e6c322806f264493e567f0fb779c4e/containers/kube-controller-manager/bdda7d31\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b0e6c322806f264493e567f0fb779c4e/etc-hosts\",\"readonly
\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"
propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-136818","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b0e6c322806f264493e567f0fb779c4e","kubernetes.io/config.hash":"b0e6c322806f264493e567f0fb779c4e","kubernetes.io/config.seen":"2025-12-02T16:07:51.422587689Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I1202 16:08:04.101538  472154 cri.go:126] list returned 4 containers
	I1202 16:08:04.101561  472154 cri.go:129] container: {ID:498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b Status:running}
	I1202 16:08:04.101605  472154 cri.go:135] skipping {498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b running}: state = "running", want "paused"
	I1202 16:08:04.101622  472154 cri.go:129] container: {ID:855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07 Status:running}
	I1202 16:08:04.101630  472154 cri.go:135] skipping {855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07 running}: state = "running", want "paused"
	I1202 16:08:04.101638  472154 cri.go:129] container: {ID:903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710 Status:running}
	I1202 16:08:04.101647  472154 cri.go:135] skipping {903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710 running}: state = "running", want "paused"
	I1202 16:08:04.101661  472154 cri.go:129] container: {ID:b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8 Status:running}
	I1202 16:08:04.101672  472154 cri.go:135] skipping {b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8 running}: state = "running", want "paused"
	I1202 16:08:04.101728  472154 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:08:04.112158  472154 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:08:04.112182  472154 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:08:04.112231  472154 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:08:04.122569  472154 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:08:04.123111  472154 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-136818" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:08:04.123347  472154 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-264555/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-136818" cluster setting kubeconfig missing "running-upgrade-136818" context setting]
	I1202 16:08:04.123818  472154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:04.181776  472154 kapi.go:59] client config for running-upgrade-136818: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/running-upgrade-136818/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/running-upgrade-136818/client.key", CAFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 16:08:04.182189  472154 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 16:08:04.182204  472154 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 16:08:04.182209  472154 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 16:08:04.182213  472154 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 16:08:04.182217  472154 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 16:08:04.182736  472154 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:08:04.195187  472154 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 16:07:47.631970009 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 16:08:03.365177853 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
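The drift detection above is a plain unified diff: the kubeadm config already on the node (/var/tmp/minikube/kubeadm.yaml) is compared with the freshly rendered one (kubeadm.yaml.new), and any difference (here, the dropped etcd proxy-refresh-interval extraArgs) makes minikube reconfigure the cluster instead of reusing it as-is. The check itself can be reproduced on the node with:

    # non-empty output (exit status 1) marks the deployed config as drifted
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new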
	I1202 16:08:04.195212  472154 kubeadm.go:1161] stopping kube-system containers ...
	I1202 16:08:04.195228  472154 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 16:08:04.195285  472154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:08:04.234505  472154 cri.go:89] found id: "903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710"
	I1202 16:08:04.234531  472154 cri.go:89] found id: "b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8"
	I1202 16:08:04.234537  472154 cri.go:89] found id: "498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b"
	I1202 16:08:04.234541  472154 cri.go:89] found id: "855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07"
	I1202 16:08:04.234549  472154 cri.go:89] found id: ""
	I1202 16:08:04.234556  472154 cri.go:252] Stopping containers: [903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710 b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8 498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b 855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07]
	I1202 16:08:04.234620  472154 ssh_runner.go:195] Run: which crictl
	I1202 16:08:04.238597  472154 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 903da8d09c9e6cda86061dca786e6a99ceccaa240d6ae1caa073a6c8e2ddc710 b3cf1c8a165609837ab3dcc2484db211f1289eb9490c7384eed87b91d88282d8 498eb449fe529c0dee1e99f4fb5244bb816db07782983b197d065610e40f0f9b 855c0ebf3bc431a4ca21a5e52e7b6c9867ade7c5db9b878c7587555d84b1bc07
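Reconfiguring the control plane starts by stopping every kube-system container found through the CRI, which is what the crictl stop above does. A minimal sketch of the equivalent manual sequence, using the same label filter and 10-second stop timeout as the log:

    # collect the IDs of all kube-system containers (running or not)
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)

    # stop them with the same grace period minikube uses here
    [ -n "$ids" ] && sudo crictl stop --timeout=10 $ids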
	I1202 16:08:08.424935  472164 cli_runner.go:164] Run: docker network inspect missing-upgrade-881462 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:08:08.447239  472164 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1202 16:08:08.451700  472164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:08:08.464689  472164 kubeadm.go:883] updating cluster {Name:missing-upgrade-881462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-881462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:08:08.464827  472164 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1202 16:08:08.464887  472164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:08:08.556304  472164 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:08:08.556321  472164 crio.go:433] Images already preloaded, skipping extraction
	I1202 16:08:08.556383  472164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:08:08.595970  472164 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:08:08.595985  472164 cache_images.go:84] Images are preloaded, skipping loading
	I1202 16:08:08.595993  472164 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1202 16:08:08.596104  472164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=missing-upgrade-881462 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-881462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
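The [Unit]/[Service]/[Install] text above is the kubelet systemd drop-in minikube generates for this node; a few lines further down it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf before the daemon-reload. A quick way to confirm what the unit will actually execute (standard systemctl commands, not taken from this log):

    # show the merged kubelet unit, including the minikube drop-in
    sudo systemctl cat kubelet

    # print only the effective ExecStart line after daemon-reload
    sudo systemctl show kubelet -p ExecStart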
	I1202 16:08:08.596171  472164 ssh_runner.go:195] Run: crio config
	I1202 16:08:08.644903  472164 cni.go:84] Creating CNI manager for ""
	I1202 16:08:08.644915  472164 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:08:08.644924  472164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 16:08:08.644944  472164 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-881462 NodeName:missing-upgrade-881462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:08:08.645083  472164 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-881462"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 16:08:08.645139  472164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1202 16:08:08.655937  472164 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 16:08:08.656009  472164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:08:08.665704  472164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1202 16:08:08.686546  472164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 16:08:08.710808  472164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1202 16:08:08.731488  472164 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:08:08.735703  472164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
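Both the host.minikube.internal and control-plane.minikube.internal entries are added to /etc/hosts with the pattern seen above: grep for the tab-separated entry, then rewrite the file with every other line preserved plus the new mapping. A sketch of that pattern for the control-plane entry (IP and hostname as in the log; the "only if missing" guard is an assumption based on the grep that precedes the rewrite):

    # add the control-plane mapping only if it is not already present
    if ! grep -q $'\tcontrol-plane.minikube.internal$' /etc/hosts; then
      { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
        printf '192.168.76.2\tcontrol-plane.minikube.internal\n'
      } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
    fi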
	I1202 16:08:08.748326  472164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:08.815213  472164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:08:08.842610  472164 certs.go:68] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462 for IP: 192.168.76.2
	I1202 16:08:08.842629  472164 certs.go:194] generating shared ca certs ...
	I1202 16:08:08.842651  472164 certs.go:226] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:08.842821  472164 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:08:08.842874  472164 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:08:08.842883  472164 certs.go:256] generating profile certs ...
	I1202 16:08:08.842956  472164 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/client.key
	I1202 16:08:08.842979  472164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/client.crt with IP's: []
	I1202 16:08:08.084585  474687 cli_runner.go:164] Run: docker network inspect pause-907557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:08:08.110043  474687 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 16:08:08.116234  474687 kubeadm.go:884] updating cluster {Name:pause-907557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-907557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:08:08.116572  474687 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:08:08.116638  474687 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:08:08.154766  474687 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:08:08.154789  474687 crio.go:433] Images already preloaded, skipping extraction
	I1202 16:08:08.154847  474687 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:08:08.185342  474687 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:08:08.185364  474687 cache_images.go:86] Images are preloaded, skipping loading
	I1202 16:08:08.185373  474687 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1202 16:08:08.185533  474687 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-907557 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-907557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:08:08.185630  474687 ssh_runner.go:195] Run: crio config
	I1202 16:08:08.238539  474687 cni.go:84] Creating CNI manager for ""
	I1202 16:08:08.238561  474687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:08:08.238575  474687 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 16:08:08.238597  474687 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-907557 NodeName:pause-907557 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:08:08.238726  474687 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-907557"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 16:08:08.238795  474687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 16:08:08.248934  474687 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 16:08:08.248991  474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:08:08.257721  474687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1202 16:08:08.273331  474687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 16:08:08.289452  474687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1202 16:08:08.304194  474687 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:08:08.309148  474687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:08.428243  474687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:08:08.445729  474687 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557 for IP: 192.168.85.2
	I1202 16:08:08.445750  474687 certs.go:195] generating shared ca certs ...
	I1202 16:08:08.445771  474687 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:08.445933  474687 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:08:08.445992  474687 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:08:08.446008  474687 certs.go:257] generating profile certs ...
	I1202 16:08:08.446122  474687 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/client.key
	I1202 16:08:08.446191  474687 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/apiserver.key.7fab9d41
	I1202 16:08:08.446244  474687 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/proxy-client.key
	I1202 16:08:08.446382  474687 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:08:08.446453  474687 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:08:08.446468  474687 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:08:08.446508  474687 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:08:08.446551  474687 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:08:08.446590  474687 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:08:08.446664  474687 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:08:08.447352  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:08:08.468682  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:08:08.490407  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:08:08.510792  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:08:08.529893  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 16:08:08.549638  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 16:08:08.569673  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:08:08.590413  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 16:08:08.611782  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:08:08.632273  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:08:08.652925  474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:08:08.672220  474687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:08:08.687006  474687 ssh_runner.go:195] Run: openssl version
	I1202 16:08:08.693871  474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:08:08.702892  474687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:08:08.707221  474687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:08:08.707289  474687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:08:08.744793  474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:08:08.753835  474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:08:08.763880  474687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:08.768860  474687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:08.768930  474687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:08.808826  474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:08:08.817883  474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:08:08.826971  474687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:08:08.831227  474687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:08:08.831288  474687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:08:08.878371  474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:08:08.889105  474687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:08:08.894003  474687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:08:08.930584  474687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:08:08.967913  474687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:08:09.005637  474687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:08:09.042326  474687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:08:09.078094  474687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
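The six openssl x509 -checkend 86400 probes above only confirm that each existing control-plane certificate stays valid for at least another 24 hours before it is reused. A minimal Go sketch of the equivalent check (illustrative only, not minikube's own helper; the certificate path is an example):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Example path; minikube checks the certs under /var/lib/minikube/certs.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
		// expires within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h, would regenerate")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}
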
	I1202 16:08:09.117708  474687 kubeadm.go:401] StartCluster: {Name:pause-907557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-907557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:08:09.117856  474687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:08:09.117916  474687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:08:09.149043  474687 cri.go:89] found id: "b2836aceeb8807e0993320e05f6aa6c4be7c30aaaa190092f8e98f5f7dd646ec"
	I1202 16:08:09.149084  474687 cri.go:89] found id: "34828dad597db079c97a036969df0740139e6fd38885ad5627968129aef7c2b3"
	I1202 16:08:09.149093  474687 cri.go:89] found id: "586f014c53211c1af9d8288055382380c3d51998056d288238f813c46118b641"
	I1202 16:08:09.149101  474687 cri.go:89] found id: "1ac7ddf9843eebd770bec15da5164025aa9877f89ae53a56ffdd6e14a093fe56"
	I1202 16:08:09.149108  474687 cri.go:89] found id: "7cc002479c3d20848066c689b18ebdf1db75e87f1c451b1526e550789e7a63fa"
	I1202 16:08:09.149114  474687 cri.go:89] found id: "cdfe7eda529156977893291247b97065289958fe65cbac19931af954d1f7e904"
	I1202 16:08:09.149120  474687 cri.go:89] found id: "132312565fa9df9459ca2fab422a4a035d2dd56ac519dec4d9ca9c4397bc628b"
	I1202 16:08:09.149123  474687 cri.go:89] found id: ""
	I1202 16:08:09.149180  474687 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 16:08:09.163029  474687 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:08:09Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:08:09.163100  474687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:08:09.171509  474687 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:08:09.171533  474687 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:08:09.171581  474687 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:08:09.180968  474687 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:08:09.182066  474687 kubeconfig.go:125] found "pause-907557" server: "https://192.168.85.2:8443"
	I1202 16:08:09.183465  474687 kapi.go:59] client config for pause-907557: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/client.key", CAFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 16:08:09.184033  474687 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 16:08:09.184052  474687 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 16:08:09.184059  474687 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 16:08:09.184065  474687 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 16:08:09.184070  474687 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 16:08:09.184480  474687 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:08:09.197588  474687 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 16:08:09.197630  474687 kubeadm.go:602] duration metric: took 26.090944ms to restartPrimaryControlPlane
	I1202 16:08:09.197640  474687 kubeadm.go:403] duration metric: took 79.945856ms to StartCluster
	I1202 16:08:09.197658  474687 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.197742  474687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:08:09.198754  474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.199011  474687 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:08:09.199122  474687 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 16:08:09.199248  474687 config.go:182] Loaded profile config "pause-907557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:08:09.201711  474687 out.go:179] * Enabled addons: 
	I1202 16:08:09.201720  474687 out.go:179] * Verifying Kubernetes components...
	I1202 16:08:09.202912  474687 addons.go:530] duration metric: took 3.794378ms for enable addons: enabled=[]
	I1202 16:08:09.202957  474687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:09.334964  474687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:08:09.350715  474687 node_ready.go:35] waiting up to 6m0s for node "pause-907557" to be "Ready" ...
	I1202 16:08:09.361396  474687 node_ready.go:49] node "pause-907557" is "Ready"
	I1202 16:08:09.361709  474687 node_ready.go:38] duration metric: took 10.951637ms for node "pause-907557" to be "Ready" ...
	I1202 16:08:09.361774  474687 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:08:09.361860  474687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:08:09.379150  474687 api_server.go:72] duration metric: took 180.016104ms to wait for apiserver process to appear ...
	I1202 16:08:09.379240  474687 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:08:09.379280  474687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 16:08:09.387363  474687 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1202 16:08:09.388587  474687 api_server.go:141] control plane version: v1.34.2
	I1202 16:08:09.388619  474687 api_server.go:131] duration metric: took 9.359692ms to wait for apiserver health ...
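The healthz wait above amounts to an HTTPS GET against the apiserver until it returns 200 with body "ok". A rough Go equivalent (illustrative only; the real client trusts the cluster CA instead of skipping TLS verification):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustrative shortcut; minikube verifies against its own CA cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable yet:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
	}
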
	I1202 16:08:09.388630  474687 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:08:09.391723  474687 system_pods.go:59] 7 kube-system pods found
	I1202 16:08:09.391753  474687 system_pods.go:61] "coredns-66bc5c9577-ckjzv" [41952b1f-3ef9-414d-99f6-b4d638903867] Running
	I1202 16:08:09.391760  474687 system_pods.go:61] "etcd-pause-907557" [321b3b9b-6fd8-4e31-affc-aa795a64994b] Running
	I1202 16:08:09.391764  474687 system_pods.go:61] "kindnet-svk5r" [6a32f68e-4724-4380-8045-ca504c4294c9] Running
	I1202 16:08:09.391769  474687 system_pods.go:61] "kube-apiserver-pause-907557" [fb99da0c-34e5-4b60-bdcb-5211eb9bf260] Running
	I1202 16:08:09.391774  474687 system_pods.go:61] "kube-controller-manager-pause-907557" [ae7b492e-cf03-466c-ab30-2797fdbc1202] Running
	I1202 16:08:09.391783  474687 system_pods.go:61] "kube-proxy-6wbvh" [402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7] Running
	I1202 16:08:09.391795  474687 system_pods.go:61] "kube-scheduler-pause-907557" [49ed68b4-af67-402e-8473-87079a43e9b0] Running
	I1202 16:08:09.391802  474687 system_pods.go:74] duration metric: took 3.165489ms to wait for pod list to return data ...
	I1202 16:08:09.391817  474687 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:08:09.393791  474687 default_sa.go:45] found service account: "default"
	I1202 16:08:09.393815  474687 default_sa.go:55] duration metric: took 1.989216ms for default service account to be created ...
	I1202 16:08:09.393825  474687 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:08:09.396312  474687 system_pods.go:86] 7 kube-system pods found
	I1202 16:08:09.396335  474687 system_pods.go:89] "coredns-66bc5c9577-ckjzv" [41952b1f-3ef9-414d-99f6-b4d638903867] Running
	I1202 16:08:09.396341  474687 system_pods.go:89] "etcd-pause-907557" [321b3b9b-6fd8-4e31-affc-aa795a64994b] Running
	I1202 16:08:09.396344  474687 system_pods.go:89] "kindnet-svk5r" [6a32f68e-4724-4380-8045-ca504c4294c9] Running
	I1202 16:08:09.396348  474687 system_pods.go:89] "kube-apiserver-pause-907557" [fb99da0c-34e5-4b60-bdcb-5211eb9bf260] Running
	I1202 16:08:09.396351  474687 system_pods.go:89] "kube-controller-manager-pause-907557" [ae7b492e-cf03-466c-ab30-2797fdbc1202] Running
	I1202 16:08:09.396355  474687 system_pods.go:89] "kube-proxy-6wbvh" [402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7] Running
	I1202 16:08:09.396358  474687 system_pods.go:89] "kube-scheduler-pause-907557" [49ed68b4-af67-402e-8473-87079a43e9b0] Running
	I1202 16:08:09.396363  474687 system_pods.go:126] duration metric: took 2.53312ms to wait for k8s-apps to be running ...
	I1202 16:08:09.396369  474687 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:08:09.396413  474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:08:09.411891  474687 system_svc.go:56] duration metric: took 15.508976ms WaitForService to wait for kubelet
	I1202 16:08:09.411931  474687 kubeadm.go:587] duration metric: took 212.882843ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:08:09.411961  474687 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:08:09.415055  474687 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:08:09.415086  474687 node_conditions.go:123] node cpu capacity is 8
	I1202 16:08:09.415113  474687 node_conditions.go:105] duration metric: took 3.144417ms to run NodePressure ...
	I1202 16:08:09.415131  474687 start.go:242] waiting for startup goroutines ...
	I1202 16:08:09.415142  474687 start.go:247] waiting for cluster config update ...
	I1202 16:08:09.415156  474687 start.go:256] writing updated cluster config ...
	I1202 16:08:09.415536  474687 ssh_runner.go:195] Run: rm -f paused
	I1202 16:08:09.419407  474687 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:08:09.420022  474687 kapi.go:59] client config for pause-907557: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/pause-907557/client.key", CAFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 16:08:09.422854  474687 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ckjzv" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.426921  474687 pod_ready.go:94] pod "coredns-66bc5c9577-ckjzv" is "Ready"
	I1202 16:08:09.426955  474687 pod_ready.go:86] duration metric: took 4.077844ms for pod "coredns-66bc5c9577-ckjzv" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.429146  474687 pod_ready.go:83] waiting for pod "etcd-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.432844  474687 pod_ready.go:94] pod "etcd-pause-907557" is "Ready"
	I1202 16:08:09.432867  474687 pod_ready.go:86] duration metric: took 3.697806ms for pod "etcd-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.434754  474687 pod_ready.go:83] waiting for pod "kube-apiserver-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.438207  474687 pod_ready.go:94] pod "kube-apiserver-pause-907557" is "Ready"
	I1202 16:08:09.438229  474687 pod_ready.go:86] duration metric: took 3.451466ms for pod "kube-apiserver-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:09.440160  474687 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
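The pod_ready waits above check each kube-system pod for the PodReady condition. A condensed client-go sketch of that check (the kubeconfig path and pod name are examples, not the harness's actual paths):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Example kubeconfig location; the test harness uses its own profile dir.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-907557", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %s ready: %v\n", pod.Name, ready)
	}
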
	I1202 16:08:09.064399  472164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/client.crt ...
	I1202 16:08:09.064432  472164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/client.crt: {Name:mke2d670641a9d4bc809de9f6a3fdd72fd1842f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.064652  472164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/client.key ...
	I1202 16:08:09.064666  472164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/client.key: {Name:mkabe585a0e7e4028b3beefc9e9bc1c4b31bc7af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.064764  472164 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.key.f7728d12
	I1202 16:08:09.064776  472164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.crt.f7728d12 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1202 16:08:09.333221  472164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.crt.f7728d12 ...
	I1202 16:08:09.333240  472164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.crt.f7728d12: {Name:mk47f5ee4254c7b1fe9ef36b30cab9d3b7a75ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.333460  472164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.key.f7728d12 ...
	I1202 16:08:09.333480  472164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.key.f7728d12: {Name:mkd67552487d29c5b368464c7f77c283d785a645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.333606  472164 certs.go:381] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.crt.f7728d12 -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.crt
	I1202 16:08:09.333733  472164 certs.go:385] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.key.f7728d12 -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.key
	I1202 16:08:09.333825  472164 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.key
	I1202 16:08:09.333840  472164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.crt with IP's: []
	I1202 16:08:09.432175  472164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.crt ...
	I1202 16:08:09.432207  472164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.crt: {Name:mk3ba9ddcf274888a21306742942b9f32fae4cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:09.432397  472164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.key ...
	I1202 16:08:09.432408  472164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.key: {Name:mke7b071c2844210a0f644e5ce0b8222208bdae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
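The apiserver profile certificate generated above carries the service IP, loopback, and node IP as SANs. A self-contained Go sketch of issuing a certificate with those IP SANs (self-signed here for brevity; minikube actually signs these with its shared CA key):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed stand-in for the profile cert.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
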
	I1202 16:08:09.432672  472164 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:08:09.432724  472164 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:08:09.432733  472164 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:08:09.432758  472164 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:08:09.432779  472164 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:08:09.432797  472164 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:08:09.432831  472164 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:08:09.433458  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:08:09.461489  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:08:09.489607  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:08:09.515584  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:08:09.541143  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 16:08:09.567627  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 16:08:09.593852  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:08:09.620453  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/missing-upgrade-881462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 16:08:09.647590  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:08:09.677139  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:08:09.703481  472164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:08:09.729671  472164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:08:09.749727  472164 ssh_runner.go:195] Run: openssl version
	I1202 16:08:09.756557  472164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:08:09.767855  472164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:09.772035  472164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:09.772085  472164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:09.780186  472164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:08:09.793296  472164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:08:09.804723  472164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:08:09.809249  472164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:08:09.809309  472164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:08:09.817447  472164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:08:09.830177  472164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:08:09.843761  472164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:08:09.848051  472164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:08:09.848117  472164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:08:09.856124  472164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:08:09.870676  472164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:08:09.874962  472164 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 16:08:09.875019  472164 kubeadm.go:392] StartCluster: {Name:missing-upgrade-881462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-881462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:08:09.875115  472164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:08:09.875169  472164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:08:09.918559  472164 cri.go:89] found id: ""
	I1202 16:08:09.918624  472164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:08:09.933321  472164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 16:08:09.944227  472164 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1202 16:08:09.944287  472164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 16:08:09.954972  472164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 16:08:09.954985  472164 kubeadm.go:157] found existing configuration files:
	
	I1202 16:08:09.955033  472164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 16:08:09.967009  472164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 16:08:09.967052  472164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 16:08:09.977493  472164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 16:08:09.988047  472164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 16:08:09.988103  472164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 16:08:09.997753  472164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 16:08:10.008366  472164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 16:08:10.008443  472164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 16:08:10.018324  472164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 16:08:10.029518  472164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 16:08:10.029574  472164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 16:08:10.039576  472164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 16:08:10.085990  472164 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1202 16:08:10.086084  472164 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 16:08:10.107476  472164 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1202 16:08:10.107590  472164 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 16:08:10.107635  472164 kubeadm.go:310] OS: Linux
	I1202 16:08:10.107775  472164 kubeadm.go:310] CGROUPS_CPU: enabled
	I1202 16:08:10.107838  472164 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1202 16:08:10.107904  472164 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1202 16:08:10.108013  472164 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1202 16:08:10.108087  472164 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1202 16:08:10.108163  472164 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1202 16:08:10.108237  472164 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1202 16:08:10.108304  472164 kubeadm.go:310] CGROUPS_IO: enabled
	I1202 16:08:10.177067  472164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 16:08:10.177214  472164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 16:08:10.177364  472164 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 16:08:10.186403  472164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 16:08:07.012064  473081 cli_runner.go:164] Run: docker network inspect stopped-upgrade-937293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:08:07.032127  473081 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 16:08:07.036635  473081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:08:07.050647  473081 kubeadm.go:884] updating cluster {Name:stopped-upgrade-937293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-937293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:08:07.050776  473081 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1202 16:08:07.050832  473081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:08:07.107196  473081 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:08:07.107226  473081 crio.go:433] Images already preloaded, skipping extraction
	I1202 16:08:07.107292  473081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:08:07.146500  473081 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:08:07.146526  473081 cache_images.go:86] Images are preloaded, skipping loading
	I1202 16:08:07.146536  473081 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.32.0 crio true true} ...
	I1202 16:08:07.146663  473081 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-937293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-937293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:08:07.146792  473081 ssh_runner.go:195] Run: crio config
	I1202 16:08:07.195524  473081 cni.go:84] Creating CNI manager for ""
	I1202 16:08:07.195546  473081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:08:07.195567  473081 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 16:08:07.195598  473081 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-937293 NodeName:stopped-upgrade-937293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:08:07.195753  473081 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-937293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 16:08:07.195826  473081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1202 16:08:07.206090  473081 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 16:08:07.206165  473081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:08:07.215812  473081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1202 16:08:07.240879  473081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 16:08:07.259781  473081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 16:08:07.280525  473081 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:08:07.284809  473081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:08:07.297256  473081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:08:07.374154  473081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:08:07.394335  473081 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293 for IP: 192.168.103.2
	I1202 16:08:07.394355  473081 certs.go:195] generating shared ca certs ...
	I1202 16:08:07.394377  473081 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:07.394641  473081 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:08:07.394702  473081 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:08:07.394717  473081 certs.go:257] generating profile certs ...
	I1202 16:08:07.394882  473081 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/client.key
	I1202 16:08:07.394976  473081 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/apiserver.key.083656e0
	I1202 16:08:07.395030  473081 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/proxy-client.key
	I1202 16:08:07.395175  473081 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:08:07.395220  473081 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:08:07.395233  473081 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:08:07.395269  473081 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:08:07.395305  473081 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:08:07.395339  473081 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:08:07.395399  473081 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:08:07.396168  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:08:07.426258  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:08:07.463510  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:08:07.504377  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:08:07.532886  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 16:08:07.565526  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 16:08:07.593591  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:08:07.623126  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 16:08:07.659804  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:08:07.691915  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:08:07.717909  473081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:08:07.749606  473081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:08:07.771662  473081 ssh_runner.go:195] Run: openssl version
	I1202 16:08:07.777784  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:08:07.789885  473081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:08:07.794230  473081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:08:07.794286  473081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:08:07.802324  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:08:07.813053  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:08:07.823680  473081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:08:07.828127  473081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:08:07.828204  473081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:08:07.836527  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:08:07.848505  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:08:07.860247  473081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:07.863893  473081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:07.863955  473081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:08:07.871317  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:08:07.882120  473081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:08:07.886271  473081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:08:07.894832  473081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:08:07.902829  473081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:08:07.911331  473081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:08:07.920231  473081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:08:07.929163  473081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
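
The seven "openssl x509 ... -checkend 86400" runs above each ask whether a control-plane certificate will still be valid 24 hours from now; a failing check would force that certificate to be regenerated. A minimal Go sketch of the same check for a single PEM file (a hypothetical helper, not minikube's certs.go; the file name in main stands in for one of the /var/lib/minikube/certs paths above):

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// validFor reports whether the first certificate in the PEM file at path
// will still be valid at now+window (the question openssl's -checkend
// flag answers with its exit code).
func validFor(path string, window time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("no PEM block in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
    // Stand-in path; on the node the real file lives under /var/lib/minikube/certs.
    ok, err := validFor("apiserver-kubelet-client.crt", 24*time.Hour)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("valid for another 24h:", ok)
}
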
	I1202 16:08:07.936643  473081 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-937293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-937293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:08:07.936753  473081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:08:07.936814  473081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:08:07.984920  473081 cri.go:89] found id: ""
	I1202 16:08:07.984996  473081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:08:07.997625  473081 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:08:07.997655  473081 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:08:07.997713  473081 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:08:08.011908  473081 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:08:08.012809  473081 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-937293" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:08:08.013316  473081 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-264555/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-937293" cluster setting kubeconfig missing "stopped-upgrade-937293" context setting]
	I1202 16:08:08.014051  473081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:08:08.015017  473081 kapi.go:59] client config for stopped-upgrade-937293: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/client.key", CAFile:"/home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 16:08:08.015590  473081 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 16:08:08.015623  473081 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 16:08:08.015631  473081 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 16:08:08.015638  473081 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 16:08:08.015645  473081 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 16:08:08.016085  473081 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:08:08.030122  473081 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 16:07:47.360949203 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 16:08:07.276478124 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
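
The drift report above comes from comparing the kubeadm.yaml the cluster was last started with against the freshly rendered kubeadm.yaml.new (the "sudo diff -u" run a few lines earlier); any difference, here the removed etcd proxy-refresh-interval extraArgs, means the control plane gets reconfigured rather than simply restarted. A rough Go sketch of that comparison, assuming both files are readable locally (a hypothetical helper, not the actual kubeadm.go logic, which shells the diff out over SSH):

package main

import (
    "bytes"
    "fmt"
    "os"
)

// needsReconfigure reports whether the freshly rendered kubeadm config
// differs from the one the control plane was last started with.
func needsReconfigure(currentPath, renderedPath string) (bool, error) {
    current, err := os.ReadFile(currentPath)
    if err != nil {
        return false, err
    }
    rendered, err := os.ReadFile(renderedPath)
    if err != nil {
        return false, err
    }
    return !bytes.Equal(current, rendered), nil
}

func main() {
    drift, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("kubeadm config drift:", drift)
}
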
	I1202 16:08:08.030146  473081 kubeadm.go:1161] stopping kube-system containers ...
	I1202 16:08:08.030164  473081 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 16:08:08.030228  473081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:08:08.098762  473081 cri.go:89] found id: ""
	I1202 16:08:08.098852  473081 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 16:08:08.128146  473081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 16:08:08.138778  473081 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5647 Dec  2 16:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Dec  2 16:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec  2 16:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Dec  2 16:07 /etc/kubernetes/scheduler.conf
	
	I1202 16:08:08.138846  473081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 16:08:08.151756  473081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 16:08:08.162187  473081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 16:08:08.172103  473081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:08:08.172169  473081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 16:08:08.183811  473081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 16:08:08.195655  473081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:08:08.195727  473081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 16:08:08.206765  473081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 16:08:08.217387  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 16:08:08.269097  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 16:08:09.096332  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 16:08:09.284164  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 16:08:09.350270  473081 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 16:08:09.415031  473081 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:08:09.415108  473081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:08:09.915653  473081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:08:09.932070  473081 api_server.go:72] duration metric: took 517.042838ms to wait for apiserver process to appear ...
	I1202 16:08:09.932112  473081 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:08:09.932143  473081 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 16:08:09.932577  473081 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1202 16:08:09.822995  474687 pod_ready.go:94] pod "kube-controller-manager-pause-907557" is "Ready"
	I1202 16:08:09.823028  474687 pod_ready.go:86] duration metric: took 382.842274ms for pod "kube-controller-manager-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:10.024162  474687 pod_ready.go:83] waiting for pod "kube-proxy-6wbvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:10.423292  474687 pod_ready.go:94] pod "kube-proxy-6wbvh" is "Ready"
	I1202 16:08:10.423327  474687 pod_ready.go:86] duration metric: took 399.132522ms for pod "kube-proxy-6wbvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:10.623539  474687 pod_ready.go:83] waiting for pod "kube-scheduler-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:11.024109  474687 pod_ready.go:94] pod "kube-scheduler-pause-907557" is "Ready"
	I1202 16:08:11.024144  474687 pod_ready.go:86] duration metric: took 400.575047ms for pod "kube-scheduler-pause-907557" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:08:11.024160  474687 pod_ready.go:40] duration metric: took 1.604689785s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:08:11.071445  474687 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 16:08:11.073469  474687 out.go:179] * Done! kubectl is now configured to use "pause-907557" cluster and "default" namespace by default
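
The pod_ready.go waits above poll each kube-system pod until its PodReady condition reports True (or the pod is gone). A minimal client-go sketch of that per-pod check, assuming KUBECONFIG points at the cluster and using kube-proxy-6wbvh from this run purely as an example name (a sketch of the idea, not minikube's own wait loop):

package main

import (
    "context"
    "fmt"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether a pod's PodReady condition is True, the check
// the per-pod waits above keep repeating until it holds.
func isReady(pod *corev1.Pod) bool {
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    // Assumes the KUBECONFIG environment variable names a valid kubeconfig.
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-6wbvh", metav1.GetOptions{})
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Printf("%s ready: %v\n", pod.Name, isReady(pod))
}
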
	I1202 16:08:10.189736  472164 out.go:235]   - Generating certificates and keys ...
	I1202 16:08:10.189853  472164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 16:08:10.189932  472164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 16:08:10.423463  472164 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 16:08:10.729085  472164 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 16:08:10.883225  472164 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 16:08:10.990299  472164 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 16:08:11.196234  472164 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 16:08:11.196414  472164 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost missing-upgrade-881462] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1202 16:08:11.645956  472164 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 16:08:11.646118  472164 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost missing-upgrade-881462] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1202 16:08:11.882746  472164 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 16:08:12.066274  472164 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 16:08:12.141907  472164 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 16:08:12.141981  472164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 16:08:12.405575  472164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 16:08:12.460147  472164 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 16:08:12.649175  472164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 16:08:12.850735  472164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 16:08:13.177114  472164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 16:08:13.177589  472164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 16:08:13.184016  472164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 16:08:13.185893  472164 out.go:235]   - Booting up control plane ...
	I1202 16:08:13.186022  472164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 16:08:13.186137  472164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 16:08:13.186607  472164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 16:08:13.196530  472164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 16:08:13.201991  472164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 16:08:13.202083  472164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 16:08:13.296894  472164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 16:08:13.297009  472164 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 16:08:13.798664  472164 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.881676ms
	I1202 16:08:13.798815  472164 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 16:08:10.432254  473081 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
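
The healthz probes above (connection refused at 16:08:09, retried at 16:08:10) are iterations of a wait loop: the restarted apiserver is polled at https://<node-ip>:8443/healthz until it answers 200 OK. A minimal Go sketch of such a loop, assuming certificate verification is skipped for the unauthenticated health probe (a sketch of the pattern, not api_server.go itself):

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "os"
    "time"
)

// waitForHealthz polls the apiserver healthz endpoint until it returns
// 200 OK or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout: 2 * time.Second,
        // The apiserver's serving cert is not trusted by the host running the
        // probe, so verification is skipped; no credentials are sent.
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
    if err := waitForHealthz("https://192.168.103.2:8443/healthz", 4*time.Minute); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("apiserver is healthy")
}
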
	
	
	==> CRI-O <==
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.838793512Z" level=info msg="RDT not available in the host system"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.838810675Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.839905355Z" level=info msg="Conmon does support the --sync option"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.839928024Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.839943603Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.840879476Z" level=info msg="Conmon does support the --sync option"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.840911047Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.845712296Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.84574664Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.846247229Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.846655803Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.846713204Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.936863296Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-ckjzv Namespace:kube-system ID:06a3d7f4b28636a25a1eb656a0ab0e933cbc9ee70416d384116e714d7bd2795c UID:41952b1f-3ef9-414d-99f6-b4d638903867 NetNS:/var/run/netns/0a7ca600-bea9-4791-a9f3-75ac408ef58e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00060c228}] Aliases:map[]}"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937130667Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-ckjzv for CNI network kindnet (type=ptp)"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937657613Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937692287Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.93775267Z" level=info msg="Create NRI interface"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937908743Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937929019Z" level=info msg="runtime interface created"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.93794427Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.93795277Z" level=info msg="runtime interface starting up..."
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937960535Z" level=info msg="starting plugins..."
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.937977079Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 16:08:07 pause-907557 crio[2204]: time="2025-12-02T16:08:07.93834808Z" level=info msg="No systemd watchdog enabled"
	Dec 02 16:08:07 pause-907557 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b2836aceeb880       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   06a3d7f4b2863       coredns-66bc5c9577-ckjzv               kube-system
	34828dad597db       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   26 seconds ago      Running             kindnet-cni               0                   9c7200f739ca6       kindnet-svk5r                          kube-system
	586f014c53211       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   26 seconds ago      Running             kube-proxy                0                   430408110ff33       kube-proxy-6wbvh                       kube-system
	1ac7ddf9843ee       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   39 seconds ago      Running             kube-apiserver            0                   06a0703c6431c       kube-apiserver-pause-907557            kube-system
	7cc002479c3d2       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   39 seconds ago      Running             kube-scheduler            0                   d92e374692b74       kube-scheduler-pause-907557            kube-system
	cdfe7eda52915       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   39 seconds ago      Running             kube-controller-manager   0                   1e816873b5622       kube-controller-manager-pause-907557   kube-system
	132312565fa9d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   39 seconds ago      Running             etcd                      0                   1ae45c58e8234       etcd-pause-907557                      kube-system
	
	
	==> coredns [b2836aceeb8807e0993320e05f6aa6c4be7c30aaaa190092f8e98f5f7dd646ec] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56667 - 20312 "HINFO IN 8430757461962317108.6158922499630476662. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019053882s
	
	
	==> describe nodes <==
	Name:               pause-907557
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-907557
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=pause-907557
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_07_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:07:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-907557
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:08:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:08:00 +0000   Tue, 02 Dec 2025 16:07:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:08:00 +0000   Tue, 02 Dec 2025 16:07:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:08:00 +0000   Tue, 02 Dec 2025 16:07:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:08:00 +0000   Tue, 02 Dec 2025 16:08:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-907557
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                39f3c90c-c1ff-4f22-b289-732142ace055
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-ckjzv                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-907557                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-svk5r                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-907557             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-907557    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-6wbvh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-907557             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node pause-907557 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node pause-907557 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node pause-907557 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node pause-907557 event: Registered Node pause-907557 in Controller
	  Normal  NodeReady                16s   kubelet          Node pause-907557 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 9b c8 59 55 e7 08 06
	[  +4.389247] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 07 ad 09 99 ea 08 06
	[Dec 2 15:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.025203] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023929] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 15:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023866] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +1.023913] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +2.047808] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +4.031697] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[  +8.511329] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[ +16.382712] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 15:19] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	
	
	==> etcd [132312565fa9df9459ca2fab422a4a035d2dd56ac519dec4d9ca9c4397bc628b] <==
	{"level":"warn","ts":"2025-12-02T16:07:41.445874Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T16:07:41.126020Z","time spent":"319.845456ms","remote":"127.0.0.1:39114","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":27,"request content":"key:\"/registry/clusterroles/system:aggregate-to-edit\" limit:1 "}
	{"level":"info","ts":"2025-12-02T16:07:41.445898Z","caller":"traceutil/trace.go:172","msg":"trace[1198633256] transaction","detail":"{read_only:false; response_revision:44; number_of_response:1; }","duration":"321.975483ms","start":"2025-12-02T16:07:41.123915Z","end":"2025-12-02T16:07:41.445891Z","steps":["trace[1198633256] 'process raft request'  (duration: 114.352245ms)","trace[1198633256] 'compare'  (duration: 207.456527ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T16:07:41.445926Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T16:07:41.123901Z","time spent":"322.011522ms","remote":"127.0.0.1:39264","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":705,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/prioritylevelconfigurations/global-default\" mod_revision:0 > success:<request_put:<key:\"/registry/prioritylevelconfigurations/global-default\" value_size:645 >> failure:<>"}
	{"level":"info","ts":"2025-12-02T16:07:41.573794Z","caller":"traceutil/trace.go:172","msg":"trace[306217900] linearizableReadLoop","detail":"{readStateIndex:48; appliedIndex:48; }","duration":"124.282405ms","start":"2025-12-02T16:07:41.449487Z","end":"2025-12-02T16:07:41.573770Z","steps":["trace[306217900] 'read index received'  (duration: 124.27533ms)","trace[306217900] 'applied index is now lower than readState.Index'  (duration: 6.058µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T16:07:41.824192Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"374.681379ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T16:07:41.824263Z","caller":"traceutil/trace.go:172","msg":"trace[1003424393] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:0; response_revision:44; }","duration":"374.766397ms","start":"2025-12-02T16:07:41.449483Z","end":"2025-12-02T16:07:41.824250Z","steps":["trace[1003424393] 'agreement among raft nodes before linearized reading'  (duration: 124.36671ms)","trace[1003424393] 'range keys from in-memory index tree'  (duration: 250.284002ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T16:07:41.824337Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T16:07:41.449471Z","time spent":"374.816886ms","remote":"127.0.0.1:39114","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":27,"request content":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 "}
	{"level":"warn","ts":"2025-12-02T16:07:41.824322Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"250.455934ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597456650294978 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/flowschemas/system-nodes\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/system-nodes\" value_size:595 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-02T16:07:41.824444Z","caller":"traceutil/trace.go:172","msg":"trace[1766645835] transaction","detail":"{read_only:false; response_revision:45; number_of_response:1; }","duration":"375.709417ms","start":"2025-12-02T16:07:41.448706Z","end":"2025-12-02T16:07:41.824416Z","steps":["trace[1766645835] 'process raft request'  (duration: 125.11328ms)","trace[1766645835] 'compare'  (duration: 250.350123ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T16:07:41.824496Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T16:07:41.448689Z","time spent":"375.781217ms","remote":"127.0.0.1:39252","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":637,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/system-nodes\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/system-nodes\" value_size:595 >> failure:<>"}
	{"level":"info","ts":"2025-12-02T16:07:41.949345Z","caller":"traceutil/trace.go:172","msg":"trace[88265753] linearizableReadLoop","detail":"{readStateIndex:49; appliedIndex:49; }","duration":"121.014024ms","start":"2025-12-02T16:07:41.828308Z","end":"2025-12-02T16:07:41.949322Z","steps":["trace[88265753] 'read index received'  (duration: 121.005988ms)","trace[88265753] 'applied index is now lower than readState.Index'  (duration: 6.301µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T16:07:41.950551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.22124ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:discovery\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T16:07:41.950604Z","caller":"traceutil/trace.go:172","msg":"trace[77218913] range","detail":"{range_begin:/registry/clusterrolebindings/system:discovery; range_end:; response_count:0; response_revision:45; }","duration":"122.289235ms","start":"2025-12-02T16:07:41.828303Z","end":"2025-12-02T16:07:41.950593Z","steps":["trace[77218913] 'agreement among raft nodes before linearized reading'  (duration: 121.104007ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:07:41.951318Z","caller":"traceutil/trace.go:172","msg":"trace[1054494770] transaction","detail":"{read_only:false; response_revision:47; number_of_response:1; }","duration":"122.519287ms","start":"2025-12-02T16:07:41.828786Z","end":"2025-12-02T16:07:41.951305Z","steps":["trace[1054494770] 'process raft request'  (duration: 122.452511ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:07:41.951324Z","caller":"traceutil/trace.go:172","msg":"trace[1931375124] transaction","detail":"{read_only:false; response_revision:46; number_of_response:1; }","duration":"124.301632ms","start":"2025-12-02T16:07:41.827006Z","end":"2025-12-02T16:07:41.951307Z","steps":["trace[1931375124] 'process raft request'  (duration: 122.395167ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:07:42.054939Z","caller":"traceutil/trace.go:172","msg":"trace[1800630730] transaction","detail":"{read_only:false; response_revision:48; number_of_response:1; }","duration":"100.590416ms","start":"2025-12-02T16:07:41.954324Z","end":"2025-12-02T16:07:42.054915Z","steps":["trace[1800630730] 'process raft request'  (duration: 96.760514ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:07:42.055041Z","caller":"traceutil/trace.go:172","msg":"trace[637785066] transaction","detail":"{read_only:false; response_revision:49; number_of_response:1; }","duration":"100.020236ms","start":"2025-12-02T16:07:41.954971Z","end":"2025-12-02T16:07:42.054991Z","steps":["trace[637785066] 'process raft request'  (duration: 99.886118ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:08:01.845136Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.222977ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T16:08:01.845232Z","caller":"traceutil/trace.go:172","msg":"trace[1314016532] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:387; }","duration":"157.324948ms","start":"2025-12-02T16:08:01.687888Z","end":"2025-12-02T16:08:01.845213Z","steps":["trace[1314016532] 'range keys from in-memory index tree'  (duration: 157.143333ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:08:01.845156Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.411725ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T16:08:01.845410Z","caller":"traceutil/trace.go:172","msg":"trace[280292306] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:387; }","duration":"178.635845ms","start":"2025-12-02T16:08:01.666722Z","end":"2025-12-02T16:08:01.845358Z","steps":["trace[280292306] 'range keys from in-memory index tree'  (duration: 178.359962ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:08:01.863554Z","caller":"traceutil/trace.go:172","msg":"trace[30696123] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"133.299309ms","start":"2025-12-02T16:08:01.730235Z","end":"2025-12-02T16:08:01.863534Z","steps":["trace[30696123] 'process raft request'  (duration: 133.074487ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:08:02.368918Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.620457ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T16:08:02.368979Z","caller":"traceutil/trace.go:172","msg":"trace[661681382] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:390; }","duration":"148.691856ms","start":"2025-12-02T16:08:02.220274Z","end":"2025-12-02T16:08:02.368966Z","steps":["trace[661681382] 'range keys from in-memory index tree'  (duration: 148.53513ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:08:03.725557Z","caller":"traceutil/trace.go:172","msg":"trace[374387627] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"121.076042ms","start":"2025-12-02T16:08:03.604455Z","end":"2025-12-02T16:08:03.725531Z","steps":["trace[374387627] 'process raft request'  (duration: 120.889836ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:08:16 up  2:50,  0 user,  load average: 4.04, 1.80, 1.31
	Linux pause-907557 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [34828dad597db079c97a036969df0740139e6fd38885ad5627968129aef7c2b3] <==
	I1202 16:07:50.263994       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 16:07:50.357878       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 16:07:50.358066       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:07:50.358088       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:07:50.358117       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:07:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:07:50.560051       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:07:50.560099       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:07:50.560111       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:07:50.560231       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 16:07:50.957840       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:07:50.957890       1 metrics.go:72] Registering metrics
	I1202 16:07:50.958047       1 controller.go:711] "Syncing nftables rules"
	I1202 16:08:00.563524       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:08:00.563624       1 main.go:301] handling current node
	I1202 16:08:10.566532       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:08:10.566563       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1ac7ddf9843eebd770bec15da5164025aa9877f89ae53a56ffdd6e14a093fe56] <==
	I1202 16:07:39.750102       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:07:39.750363       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1202 16:07:39.750633       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1202 16:07:39.750779       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1202 16:07:39.750960       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:07:39.853576       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:07:39.854635       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 16:07:40.262882       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:07:41.113218       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1202 16:07:41.123845       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1202 16:07:41.123870       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 16:07:42.647766       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:07:42.703824       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:07:42.860514       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 16:07:42.868695       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1202 16:07:42.870283       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 16:07:42.877222       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:07:43.587622       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:07:43.758990       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:07:43.773771       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 16:07:43.782804       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 16:07:49.296851       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:07:49.301367       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:07:49.643896       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 16:07:49.689739       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [cdfe7eda529156977893291247b97065289958fe65cbac19931af954d1f7e904] <==
	I1202 16:07:48.585946       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 16:07:48.585959       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 16:07:48.586174       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 16:07:48.586329       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 16:07:48.586477       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 16:07:48.586630       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 16:07:48.586714       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-907557"
	I1202 16:07:48.586761       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1202 16:07:48.587267       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 16:07:48.587355       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 16:07:48.588182       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 16:07:48.588259       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 16:07:48.588345       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 16:07:48.588407       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 16:07:48.588649       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 16:07:48.588688       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1202 16:07:48.589393       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1202 16:07:48.589402       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 16:07:48.589725       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 16:07:48.590665       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 16:07:48.594794       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 16:07:48.599527       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 16:07:48.605939       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1202 16:07:48.608641       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 16:08:03.727062       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [586f014c53211c1af9d8288055382380c3d51998056d288238f813c46118b641] <==
	I1202 16:07:50.127455       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:07:50.189364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 16:07:50.289787       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 16:07:50.289831       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 16:07:50.289962       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:07:50.310959       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:07:50.311006       1 server_linux.go:132] "Using iptables Proxier"
	I1202 16:07:50.317487       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:07:50.318071       1 server.go:527] "Version info" version="v1.34.2"
	I1202 16:07:50.320086       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:07:50.322190       1 config.go:200] "Starting service config controller"
	I1202 16:07:50.327638       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:07:50.322641       1 config.go:309] "Starting node config controller"
	I1202 16:07:50.327683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:07:50.327689       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:07:50.326112       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:07:50.327699       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:07:50.326098       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:07:50.327706       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:07:50.428744       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 16:07:50.428785       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 16:07:50.430007       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7cc002479c3d20848066c689b18ebdf1db75e87f1c451b1526e550789e7a63fa] <==
	E1202 16:07:39.625871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 16:07:39.625871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 16:07:39.625950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 16:07:39.626006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 16:07:40.460590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 16:07:40.485977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 16:07:40.486726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 16:07:40.550589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 16:07:40.629566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 16:07:40.728461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 16:07:40.742845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 16:07:40.765319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 16:07:40.775965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 16:07:40.820409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 16:07:40.843069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 16:07:40.850504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 16:07:40.976306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 16:07:41.122251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 16:07:41.147646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 16:07:41.152712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 16:07:41.153480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 16:07:41.215170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 16:07:41.227783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 16:07:42.431320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1202 16:07:43.620165       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 16:07:44 pause-907557 kubelet[1341]: E1202 16:07:44.667350    1341 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-907557\" already exists" pod="kube-system/kube-apiserver-pause-907557"
	Dec 02 16:07:44 pause-907557 kubelet[1341]: I1202 16:07:44.720439    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-907557" podStartSLOduration=1.7203934140000001 podStartE2EDuration="1.720393414s" podCreationTimestamp="2025-12-02 16:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:07:44.70863584 +0000 UTC m=+1.178923066" watchObservedRunningTime="2025-12-02 16:07:44.720393414 +0000 UTC m=+1.190680641"
	Dec 02 16:07:44 pause-907557 kubelet[1341]: I1202 16:07:44.737610    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-907557" podStartSLOduration=1.7375809800000002 podStartE2EDuration="1.73758098s" podCreationTimestamp="2025-12-02 16:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:07:44.72059503 +0000 UTC m=+1.190882274" watchObservedRunningTime="2025-12-02 16:07:44.73758098 +0000 UTC m=+1.207868200"
	Dec 02 16:07:44 pause-907557 kubelet[1341]: I1202 16:07:44.737820    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-907557" podStartSLOduration=1.737805699 podStartE2EDuration="1.737805699s" podCreationTimestamp="2025-12-02 16:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:07:44.737755284 +0000 UTC m=+1.208042506" watchObservedRunningTime="2025-12-02 16:07:44.737805699 +0000 UTC m=+1.208092931"
	Dec 02 16:07:44 pause-907557 kubelet[1341]: I1202 16:07:44.747709    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-907557" podStartSLOduration=1.7476860950000002 podStartE2EDuration="1.747686095s" podCreationTimestamp="2025-12-02 16:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:07:44.747660507 +0000 UTC m=+1.217947715" watchObservedRunningTime="2025-12-02 16:07:44.747686095 +0000 UTC m=+1.217973325"
	Dec 02 16:07:48 pause-907557 kubelet[1341]: I1202 16:07:48.572786    1341 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 02 16:07:48 pause-907557 kubelet[1341]: I1202 16:07:48.573572    1341 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754450    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c5vd\" (UniqueName: \"kubernetes.io/projected/6a32f68e-4724-4380-8045-ca504c4294c9-kube-api-access-4c5vd\") pod \"kindnet-svk5r\" (UID: \"6a32f68e-4724-4380-8045-ca504c4294c9\") " pod="kube-system/kindnet-svk5r"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754505    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7-kube-proxy\") pod \"kube-proxy-6wbvh\" (UID: \"402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7\") " pod="kube-system/kube-proxy-6wbvh"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754539    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds54c\" (UniqueName: \"kubernetes.io/projected/402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7-kube-api-access-ds54c\") pod \"kube-proxy-6wbvh\" (UID: \"402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7\") " pod="kube-system/kube-proxy-6wbvh"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754562    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6a32f68e-4724-4380-8045-ca504c4294c9-cni-cfg\") pod \"kindnet-svk5r\" (UID: \"6a32f68e-4724-4380-8045-ca504c4294c9\") " pod="kube-system/kindnet-svk5r"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754657    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a32f68e-4724-4380-8045-ca504c4294c9-xtables-lock\") pod \"kindnet-svk5r\" (UID: \"6a32f68e-4724-4380-8045-ca504c4294c9\") " pod="kube-system/kindnet-svk5r"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754712    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7-xtables-lock\") pod \"kube-proxy-6wbvh\" (UID: \"402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7\") " pod="kube-system/kube-proxy-6wbvh"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754743    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7-lib-modules\") pod \"kube-proxy-6wbvh\" (UID: \"402e8b88-66d2-4e4f-b0e5-693b8e9ee4b7\") " pod="kube-system/kube-proxy-6wbvh"
	Dec 02 16:07:49 pause-907557 kubelet[1341]: I1202 16:07:49.754765    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a32f68e-4724-4380-8045-ca504c4294c9-lib-modules\") pod \"kindnet-svk5r\" (UID: \"6a32f68e-4724-4380-8045-ca504c4294c9\") " pod="kube-system/kindnet-svk5r"
	Dec 02 16:07:50 pause-907557 kubelet[1341]: I1202 16:07:50.695658    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6wbvh" podStartSLOduration=1.6956353370000001 podStartE2EDuration="1.695635337s" podCreationTimestamp="2025-12-02 16:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:07:50.695507547 +0000 UTC m=+7.165794774" watchObservedRunningTime="2025-12-02 16:07:50.695635337 +0000 UTC m=+7.165922564"
	Dec 02 16:07:50 pause-907557 kubelet[1341]: I1202 16:07:50.695795    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-svk5r" podStartSLOduration=1.695783519 podStartE2EDuration="1.695783519s" podCreationTimestamp="2025-12-02 16:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:07:50.685283997 +0000 UTC m=+7.155571224" watchObservedRunningTime="2025-12-02 16:07:50.695783519 +0000 UTC m=+7.166070746"
	Dec 02 16:08:00 pause-907557 kubelet[1341]: I1202 16:08:00.987625    1341 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 02 16:08:01 pause-907557 kubelet[1341]: I1202 16:08:01.140562    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41952b1f-3ef9-414d-99f6-b4d638903867-config-volume\") pod \"coredns-66bc5c9577-ckjzv\" (UID: \"41952b1f-3ef9-414d-99f6-b4d638903867\") " pod="kube-system/coredns-66bc5c9577-ckjzv"
	Dec 02 16:08:01 pause-907557 kubelet[1341]: I1202 16:08:01.140604    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flkb5\" (UniqueName: \"kubernetes.io/projected/41952b1f-3ef9-414d-99f6-b4d638903867-kube-api-access-flkb5\") pod \"coredns-66bc5c9577-ckjzv\" (UID: \"41952b1f-3ef9-414d-99f6-b4d638903867\") " pod="kube-system/coredns-66bc5c9577-ckjzv"
	Dec 02 16:08:02 pause-907557 kubelet[1341]: I1202 16:08:02.811291    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ckjzv" podStartSLOduration=13.811258253 podStartE2EDuration="13.811258253s" podCreationTimestamp="2025-12-02 16:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:08:02.80881092 +0000 UTC m=+19.279098147" watchObservedRunningTime="2025-12-02 16:08:02.811258253 +0000 UTC m=+19.281545483"
	Dec 02 16:08:11 pause-907557 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 16:08:11 pause-907557 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 16:08:11 pause-907557 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 16:08:11 pause-907557 systemd[1]: kubelet.service: Consumed 1.299s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-907557 -n pause-907557
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-907557 -n pause-907557: exit status 2 (350.030037ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
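The --format={{.APIServer}} flag used in the status check above is rendered as a Go text/template over minikube's status data. The sketch below illustrates that mechanism only; the Status type is a hypothetical stand-in for minikube's internal struct, with field names taken from the template keys that appear in this report ({{.APIServer}}, {{.Host}}).

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for minikube's internal status struct;
// the field names mirror the template keys used in this report.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// Equivalent in spirit to: minikube status --format={{.APIServer}} -p pause-907557
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running"}
	if err := tmpl.Execute(os.Stdout, s); err != nil { // prints: Running
		panic(err)
	}
}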
helpers_test.go:269: (dbg) Run:  kubectl --context pause-907557 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-380588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-380588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (309.583815ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:16:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-380588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-380588 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-380588 describe deploy/metrics-server -n kube-system: exit status 1 (91.711611ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-380588 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
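The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check: it shells out to "sudo runc list -f json" on the node, and the command exits with status 1 because /run/runc (runc's default state directory) does not exist. The following is a rough Go sketch of that kind of check, assuming a trimmed view of the runc JSON output; it is an illustration, not minikube's actual implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcState is a trimmed view of one entry in `runc list -f json` output;
// the real schema carries more fields (pid, bundle, created, ...).
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused sketches the check implied by the error above: run
// `sudo runc list -f json`, decode the listing, and return the IDs of
// paused containers. When /run/runc is absent, runc itself fails and the
// error propagates, matching the exit status 1 seen in the stderr block.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}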
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-380588
helpers_test.go:243: (dbg) docker inspect old-k8s-version-380588:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5",
	        "Created": "2025-12-02T16:15:24.388732142Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 581175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:15:24.429537099Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5/hostname",
	        "HostsPath": "/var/lib/docker/containers/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5/hosts",
	        "LogPath": "/var/lib/docker/containers/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5-json.log",
	        "Name": "/old-k8s-version-380588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-380588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-380588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5",
	                "LowerDir": "/var/lib/docker/overlay2/cc7db27ba93f361cedfb46f5902b70f222396dd2f79762e474c32c7912e9c9f1-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc7db27ba93f361cedfb46f5902b70f222396dd2f79762e474c32c7912e9c9f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc7db27ba93f361cedfb46f5902b70f222396dd2f79762e474c32c7912e9c9f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc7db27ba93f361cedfb46f5902b70f222396dd2f79762e474c32c7912e9c9f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-380588",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-380588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-380588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-380588",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-380588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c25cc41d743c1fb3de82a04f18e28937bc308b513b1d4a9a2b6674c8a800cae9",
	            "SandboxKey": "/var/run/docker/netns/c25cc41d743c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33218"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33222"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-380588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12755aa6121ef84808d7e2051c86e67e4ac4ab231ddc7e94bd39dd8ca085a952",
	                    "EndpointID": "cd2995140f2d46fbbaf491d197c306595de8277588697da2743266d6c553412e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "02:6e:ec:82:35:c3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-380588",
	                        "a0a1616e8b44"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-380588 -n old-k8s-version-380588
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-380588 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-380588 logs -n 25: (1.201682689s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-589300 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                    │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo docker system info                                                                                                                                 │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cri-dockerd --version                                                                                                                              │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo containerd config dump                                                                                                                             │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo crio config                                                                                                                                        │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p bridge-589300                                                                                                                                                         │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p disable-driver-mounts-904481                                                                                                                                          │ disable-driver-mounts-904481 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-380588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:16:14
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:16:14.209718  601673 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:16:14.209821  601673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:16:14.209830  601673 out.go:374] Setting ErrFile to fd 2...
	I1202 16:16:14.209834  601673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:16:14.210072  601673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:16:14.210617  601673 out.go:368] Setting JSON to false
	I1202 16:16:14.211833  601673 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10715,"bootTime":1764681459,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:16:14.211891  601673 start.go:143] virtualization: kvm guest
	I1202 16:16:14.214072  601673 out.go:179] * [default-k8s-diff-port-806420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:16:14.215581  601673 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:16:14.215590  601673 notify.go:221] Checking for updates...
	I1202 16:16:14.218253  601673 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:16:14.219997  601673 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:16:14.221278  601673 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:16:14.222622  601673 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:16:14.223868  601673 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:16:14.225645  601673 config.go:182] Loaded profile config "embed-certs-046271": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:16:14.225795  601673 config.go:182] Loaded profile config "no-preload-534842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:16:14.225922  601673 config.go:182] Loaded profile config "old-k8s-version-380588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 16:16:14.226040  601673 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:16:14.251945  601673 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:16:14.252053  601673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:16:14.323181  601673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:16:14.311584435 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:16:14.323275  601673 docker.go:319] overlay module found
	I1202 16:16:14.325245  601673 out.go:179] * Using the docker driver based on user configuration
	I1202 16:16:14.326499  601673 start.go:309] selected driver: docker
	I1202 16:16:14.326520  601673 start.go:927] validating driver "docker" against <nil>
	I1202 16:16:14.326535  601673 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:16:14.327262  601673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:16:14.392849  601673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:16:14.382511545 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:16:14.393101  601673 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 16:16:14.393473  601673 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:16:14.395493  601673 out.go:179] * Using Docker driver with root privileges
	I1202 16:16:14.396988  601673 cni.go:84] Creating CNI manager for ""
	I1202 16:16:14.397076  601673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:16:14.397091  601673 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 16:16:14.397187  601673 start.go:353] cluster config:
	{Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:16:14.398582  601673 out.go:179] * Starting "default-k8s-diff-port-806420" primary control-plane node in "default-k8s-diff-port-806420" cluster
	I1202 16:16:14.399660  601673 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:16:14.401383  601673 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:16:14.402693  601673 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:16:14.402731  601673 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 16:16:14.402743  601673 cache.go:65] Caching tarball of preloaded images
	I1202 16:16:14.402785  601673 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:16:14.402839  601673 preload.go:238] Found /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 16:16:14.402853  601673 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 16:16:14.402957  601673 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json ...
	I1202 16:16:14.402979  601673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json: {Name:mkddee2b359a6629d691bd3c15cafa759bf3a2ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:14.425924  601673 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:16:14.425951  601673 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 16:16:14.425971  601673 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:16:14.426010  601673 start.go:360] acquireMachinesLock for default-k8s-diff-port-806420: {Name:mk8a961b68c6bbf9b1910f8ae43c90e49f86c0f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:14.426130  601673 start.go:364] duration metric: took 97.185µs to acquireMachinesLock for "default-k8s-diff-port-806420"
	I1202 16:16:14.426162  601673 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:16:14.426257  601673 start.go:125] createHost starting for "" (driver="docker")
	I1202 16:16:12.110881  595674 out.go:252]   - Generating certificates and keys ...
	I1202 16:16:12.111007  595674 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 16:16:12.111093  595674 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 16:16:12.165735  595674 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 16:16:12.445729  595674 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 16:16:12.855159  595674 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 16:16:13.031505  595674 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 16:16:13.203185  595674 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 16:16:13.203385  595674 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-046271 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1202 16:16:13.732495  595674 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 16:16:13.732716  595674 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-046271 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1202 16:16:13.976699  595674 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 16:16:14.213536  595674 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 16:16:14.570978  595674 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 16:16:14.571154  595674 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 16:16:14.742176  595674 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 16:16:15.663038  595674 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 16:16:15.949044  595674 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 16:16:16.421414  595674 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 16:16:16.578550  595674 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 16:16:16.579307  595674 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 16:16:16.585128  595674 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1202 16:16:13.902752  584973 node_ready.go:57] node "no-preload-534842" has "Ready":"False" status (will retry)
	W1202 16:16:16.402531  584973 node_ready.go:57] node "no-preload-534842" has "Ready":"False" status (will retry)
	I1202 16:16:16.901976  584973 node_ready.go:49] node "no-preload-534842" is "Ready"
	I1202 16:16:16.902011  584973 node_ready.go:38] duration metric: took 14.503217219s for node "no-preload-534842" to be "Ready" ...
	I1202 16:16:16.902029  584973 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:16:16.902086  584973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:16:16.914832  584973 api_server.go:72] duration metric: took 14.940636388s to wait for apiserver process to appear ...
	I1202 16:16:16.914860  584973 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:16:16.914879  584973 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1202 16:16:16.919290  584973 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1202 16:16:16.920297  584973 api_server.go:141] control plane version: v1.35.0-beta.0
	I1202 16:16:16.920326  584973 api_server.go:131] duration metric: took 5.457583ms to wait for apiserver health ...
	I1202 16:16:16.920338  584973 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:16:16.923886  584973 system_pods.go:59] 8 kube-system pods found
	I1202 16:16:16.923938  584973 system_pods.go:61] "coredns-7d764666f9-fxl4s" [7716bc36-76db-41a6-8acc-0025ea0b7787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:16.923953  584973 system_pods.go:61] "etcd-no-preload-534842" [4156cb3a-c013-41ad-8c3e-4b32efbd243f] Running
	I1202 16:16:16.923965  584973 system_pods.go:61] "kindnet-fn84j" [e8f80ec9-4aff-4de8-aa5a-e262160e51d7] Running
	I1202 16:16:16.923971  584973 system_pods.go:61] "kube-apiserver-no-preload-534842" [d97ac382-11e9-48cd-8672-72fa466fd1d5] Running
	I1202 16:16:16.923977  584973 system_pods.go:61] "kube-controller-manager-no-preload-534842" [f7d15fa4-0c27-4ca5-a37b-f78a693fa541] Running
	I1202 16:16:16.923988  584973 system_pods.go:61] "kube-proxy-xqnrx" [d56d7371-0677-4746-972b-b3d24b8070f2] Running
	I1202 16:16:16.923997  584973 system_pods.go:61] "kube-scheduler-no-preload-534842" [73c10cb6-aaa7-4128-896f-16def63b750d] Running
	I1202 16:16:16.924005  584973 system_pods.go:61] "storage-provisioner" [15ec190a-3c61-47f3-87a1-c5ab08d312b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:16.924029  584973 system_pods.go:74] duration metric: took 3.682859ms to wait for pod list to return data ...
	I1202 16:16:16.924038  584973 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:16:16.926663  584973 default_sa.go:45] found service account: "default"
	I1202 16:16:16.926690  584973 default_sa.go:55] duration metric: took 2.641195ms for default service account to be created ...
	I1202 16:16:16.926698  584973 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:16:16.929133  584973 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:16.929157  584973 system_pods.go:89] "coredns-7d764666f9-fxl4s" [7716bc36-76db-41a6-8acc-0025ea0b7787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:16.929163  584973 system_pods.go:89] "etcd-no-preload-534842" [4156cb3a-c013-41ad-8c3e-4b32efbd243f] Running
	I1202 16:16:16.929168  584973 system_pods.go:89] "kindnet-fn84j" [e8f80ec9-4aff-4de8-aa5a-e262160e51d7] Running
	I1202 16:16:16.929172  584973 system_pods.go:89] "kube-apiserver-no-preload-534842" [d97ac382-11e9-48cd-8672-72fa466fd1d5] Running
	I1202 16:16:16.929180  584973 system_pods.go:89] "kube-controller-manager-no-preload-534842" [f7d15fa4-0c27-4ca5-a37b-f78a693fa541] Running
	I1202 16:16:16.929184  584973 system_pods.go:89] "kube-proxy-xqnrx" [d56d7371-0677-4746-972b-b3d24b8070f2] Running
	I1202 16:16:16.929188  584973 system_pods.go:89] "kube-scheduler-no-preload-534842" [73c10cb6-aaa7-4128-896f-16def63b750d] Running
	I1202 16:16:16.929195  584973 system_pods.go:89] "storage-provisioner" [15ec190a-3c61-47f3-87a1-c5ab08d312b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:16.929236  584973 retry.go:31] will retry after 220.12248ms: missing components: kube-dns
	I1202 16:16:17.154142  584973 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:17.154179  584973 system_pods.go:89] "coredns-7d764666f9-fxl4s" [7716bc36-76db-41a6-8acc-0025ea0b7787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:17.154187  584973 system_pods.go:89] "etcd-no-preload-534842" [4156cb3a-c013-41ad-8c3e-4b32efbd243f] Running
	I1202 16:16:17.154194  584973 system_pods.go:89] "kindnet-fn84j" [e8f80ec9-4aff-4de8-aa5a-e262160e51d7] Running
	I1202 16:16:17.154199  584973 system_pods.go:89] "kube-apiserver-no-preload-534842" [d97ac382-11e9-48cd-8672-72fa466fd1d5] Running
	I1202 16:16:17.154204  584973 system_pods.go:89] "kube-controller-manager-no-preload-534842" [f7d15fa4-0c27-4ca5-a37b-f78a693fa541] Running
	I1202 16:16:17.154210  584973 system_pods.go:89] "kube-proxy-xqnrx" [d56d7371-0677-4746-972b-b3d24b8070f2] Running
	I1202 16:16:17.154215  584973 system_pods.go:89] "kube-scheduler-no-preload-534842" [73c10cb6-aaa7-4128-896f-16def63b750d] Running
	I1202 16:16:17.154222  584973 system_pods.go:89] "storage-provisioner" [15ec190a-3c61-47f3-87a1-c5ab08d312b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:17.154242  584973 retry.go:31] will retry after 331.235999ms: missing components: kube-dns
	I1202 16:16:14.428219  601673 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 16:16:14.428485  601673 start.go:159] libmachine.API.Create for "default-k8s-diff-port-806420" (driver="docker")
	I1202 16:16:14.428511  601673 client.go:173] LocalClient.Create starting
	I1202 16:16:14.428579  601673 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem
	I1202 16:16:14.428613  601673 main.go:143] libmachine: Decoding PEM data...
	I1202 16:16:14.428629  601673 main.go:143] libmachine: Parsing certificate...
	I1202 16:16:14.428692  601673 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem
	I1202 16:16:14.428714  601673 main.go:143] libmachine: Decoding PEM data...
	I1202 16:16:14.428726  601673 main.go:143] libmachine: Parsing certificate...
	I1202 16:16:14.429043  601673 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-806420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 16:16:14.448324  601673 cli_runner.go:211] docker network inspect default-k8s-diff-port-806420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 16:16:14.448398  601673 network_create.go:284] running [docker network inspect default-k8s-diff-port-806420] to gather additional debugging logs...
	I1202 16:16:14.448414  601673 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-806420
	W1202 16:16:14.467616  601673 cli_runner.go:211] docker network inspect default-k8s-diff-port-806420 returned with exit code 1
	I1202 16:16:14.467644  601673 network_create.go:287] error running [docker network inspect default-k8s-diff-port-806420]: docker network inspect default-k8s-diff-port-806420: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-806420 not found
	I1202 16:16:14.467657  601673 network_create.go:289] output of [docker network inspect default-k8s-diff-port-806420]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-806420 not found
	
	** /stderr **
	I1202 16:16:14.467747  601673 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:16:14.486569  601673 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-59c4d474daec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:20:cf:7a:79:c5} reservation:<nil>}
	I1202 16:16:14.487151  601673 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-208582b1a4af IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:5b:fe:2d:46:75} reservation:<nil>}
	I1202 16:16:14.487755  601673 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-230a00bd70ce IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:8f:10:7f:8e:d3} reservation:<nil>}
	I1202 16:16:14.488333  601673 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f242ea03e26e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:4d:9d:95:a5:56} reservation:<nil>}
	I1202 16:16:14.489070  601673 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cef390}
	I1202 16:16:14.489096  601673 network_create.go:124] attempt to create docker network default-k8s-diff-port-806420 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1202 16:16:14.489142  601673 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-806420 default-k8s-diff-port-806420
	I1202 16:16:14.543099  601673 network_create.go:108] docker network default-k8s-diff-port-806420 192.168.85.0/24 created
	I1202 16:16:14.543147  601673 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-806420" container
	I1202 16:16:14.543223  601673 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 16:16:14.562850  601673 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-806420 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-806420 --label created_by.minikube.sigs.k8s.io=true
	I1202 16:16:14.583757  601673 oci.go:103] Successfully created a docker volume default-k8s-diff-port-806420
	I1202 16:16:14.583833  601673 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-806420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-806420 --entrypoint /usr/bin/test -v default-k8s-diff-port-806420:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 16:16:15.002273  601673 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-806420
	I1202 16:16:15.002360  601673 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:16:15.002376  601673 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 16:16:15.002481  601673 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-806420:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 16:16:19.095938  601673 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-806420:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.093398371s)
	I1202 16:16:19.095980  601673 kic.go:203] duration metric: took 4.093597472s to extract preloaded images to volume ...
	W1202 16:16:19.096082  601673 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 16:16:19.096127  601673 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 16:16:19.096179  601673 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 16:16:19.161087  601673 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-806420 --name default-k8s-diff-port-806420 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-806420 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-806420 --network default-k8s-diff-port-806420 --ip 192.168.85.2 --volume default-k8s-diff-port-806420:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 16:16:17.490138  584973 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:17.490177  584973 system_pods.go:89] "coredns-7d764666f9-fxl4s" [7716bc36-76db-41a6-8acc-0025ea0b7787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:17.490185  584973 system_pods.go:89] "etcd-no-preload-534842" [4156cb3a-c013-41ad-8c3e-4b32efbd243f] Running
	I1202 16:16:17.490193  584973 system_pods.go:89] "kindnet-fn84j" [e8f80ec9-4aff-4de8-aa5a-e262160e51d7] Running
	I1202 16:16:17.490198  584973 system_pods.go:89] "kube-apiserver-no-preload-534842" [d97ac382-11e9-48cd-8672-72fa466fd1d5] Running
	I1202 16:16:17.490204  584973 system_pods.go:89] "kube-controller-manager-no-preload-534842" [f7d15fa4-0c27-4ca5-a37b-f78a693fa541] Running
	I1202 16:16:17.490210  584973 system_pods.go:89] "kube-proxy-xqnrx" [d56d7371-0677-4746-972b-b3d24b8070f2] Running
	I1202 16:16:17.490214  584973 system_pods.go:89] "kube-scheduler-no-preload-534842" [73c10cb6-aaa7-4128-896f-16def63b750d] Running
	I1202 16:16:17.490226  584973 system_pods.go:89] "storage-provisioner" [15ec190a-3c61-47f3-87a1-c5ab08d312b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:17.490255  584973 retry.go:31] will retry after 427.721515ms: missing components: kube-dns
	I1202 16:16:17.923513  584973 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:17.923554  584973 system_pods.go:89] "coredns-7d764666f9-fxl4s" [7716bc36-76db-41a6-8acc-0025ea0b7787] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:17.923564  584973 system_pods.go:89] "etcd-no-preload-534842" [4156cb3a-c013-41ad-8c3e-4b32efbd243f] Running
	I1202 16:16:17.923572  584973 system_pods.go:89] "kindnet-fn84j" [e8f80ec9-4aff-4de8-aa5a-e262160e51d7] Running
	I1202 16:16:17.923578  584973 system_pods.go:89] "kube-apiserver-no-preload-534842" [d97ac382-11e9-48cd-8672-72fa466fd1d5] Running
	I1202 16:16:17.923624  584973 system_pods.go:89] "kube-controller-manager-no-preload-534842" [f7d15fa4-0c27-4ca5-a37b-f78a693fa541] Running
	I1202 16:16:17.923635  584973 system_pods.go:89] "kube-proxy-xqnrx" [d56d7371-0677-4746-972b-b3d24b8070f2] Running
	I1202 16:16:17.923641  584973 system_pods.go:89] "kube-scheduler-no-preload-534842" [73c10cb6-aaa7-4128-896f-16def63b750d] Running
	I1202 16:16:17.923646  584973 system_pods.go:89] "storage-provisioner" [15ec190a-3c61-47f3-87a1-c5ab08d312b1] Running
	I1202 16:16:17.923655  584973 system_pods.go:126] duration metric: took 996.950812ms to wait for k8s-apps to be running ...
	I1202 16:16:17.923668  584973 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:16:17.923722  584973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:16:17.938289  584973 system_svc.go:56] duration metric: took 14.613001ms WaitForService to wait for kubelet
	I1202 16:16:17.938320  584973 kubeadm.go:587] duration metric: took 15.964129537s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:16:17.938344  584973 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:16:18.075882  584973 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:16:18.075919  584973 node_conditions.go:123] node cpu capacity is 8
	I1202 16:16:18.075940  584973 node_conditions.go:105] duration metric: took 137.589775ms to run NodePressure ...
	I1202 16:16:18.075956  584973 start.go:242] waiting for startup goroutines ...
	I1202 16:16:18.075966  584973 start.go:247] waiting for cluster config update ...
	I1202 16:16:18.076026  584973 start.go:256] writing updated cluster config ...
	I1202 16:16:18.076876  584973 ssh_runner.go:195] Run: rm -f paused
	I1202 16:16:18.082417  584973 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:16:18.124890  584973 pod_ready.go:83] waiting for pod "coredns-7d764666f9-fxl4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.170125  584973 pod_ready.go:94] pod "coredns-7d764666f9-fxl4s" is "Ready"
	I1202 16:16:18.170160  584973 pod_ready.go:86] duration metric: took 45.235203ms for pod "coredns-7d764666f9-fxl4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.172726  584973 pod_ready.go:83] waiting for pod "etcd-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.219736  584973 pod_ready.go:94] pod "etcd-no-preload-534842" is "Ready"
	I1202 16:16:18.219762  584973 pod_ready.go:86] duration metric: took 47.010393ms for pod "etcd-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.273996  584973 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.278837  584973 pod_ready.go:94] pod "kube-apiserver-no-preload-534842" is "Ready"
	I1202 16:16:18.278865  584973 pod_ready.go:86] duration metric: took 4.833809ms for pod "kube-apiserver-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.281286  584973 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.486813  584973 pod_ready.go:94] pod "kube-controller-manager-no-preload-534842" is "Ready"
	I1202 16:16:18.486842  584973 pod_ready.go:86] duration metric: took 205.53029ms for pod "kube-controller-manager-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.687204  584973 pod_ready.go:83] waiting for pod "kube-proxy-xqnrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:19.087513  584973 pod_ready.go:94] pod "kube-proxy-xqnrx" is "Ready"
	I1202 16:16:19.087542  584973 pod_ready.go:86] duration metric: took 400.310852ms for pod "kube-proxy-xqnrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:19.287892  584973 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:19.687211  584973 pod_ready.go:94] pod "kube-scheduler-no-preload-534842" is "Ready"
	I1202 16:16:19.687244  584973 pod_ready.go:86] duration metric: took 399.325419ms for pod "kube-scheduler-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:19.687261  584973 pod_ready.go:40] duration metric: took 1.604793693s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:16:19.754680  584973 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 16:16:19.760999  584973 out.go:179] * Done! kubectl is now configured to use "no-preload-534842" cluster and "default" namespace by default
	I1202 16:16:16.586965  595674 out.go:252]   - Booting up control plane ...
	I1202 16:16:16.587098  595674 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 16:16:16.587227  595674 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 16:16:16.587918  595674 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 16:16:16.604237  595674 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 16:16:16.604343  595674 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 16:16:16.612211  595674 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 16:16:16.612373  595674 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 16:16:16.612465  595674 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 16:16:16.728880  595674 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 16:16:16.729061  595674 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 16:16:18.229715  595674 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500978888s
	I1202 16:16:18.232625  595674 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 16:16:18.232774  595674 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1202 16:16:18.232903  595674 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 16:16:18.233042  595674 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 16:16:20.394553  595674 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.161747655s
	
	
	==> CRI-O <==
	Dec 02 16:16:09 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:09.800288456Z" level=info msg="Starting container: 357864372e14c9eb748d3344ed2c2447a6e37a6c14a0129adfb4116fd6a890be" id=948055b2-e43a-4fb6-9024-0227a10247ba name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:16:09 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:09.802349283Z" level=info msg="Started container" PID=2117 containerID=357864372e14c9eb748d3344ed2c2447a6e37a6c14a0129adfb4116fd6a890be description=kube-system/coredns-5dd5756b68-fsfh2/coredns id=948055b2-e43a-4fb6-9024-0227a10247ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=b7d8d539dfb4a8591777fb54dcc2504ccf0cb80cb11f9fddf5da82b6af37fde2
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.264474081Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d7d1e34c-cbef-4caa-9b53-bc1befed4834 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.264554959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.269765107Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f61153492bd97fcbadbd6f95e9fce0d29c774d76bf8b9df6590786ab7018b4de UID:44dc2786-babf-4e74-89be-27670ac97906 NetNS:/var/run/netns/f47eb776-c7a5-460b-9936-589b3d47a49f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004805a0}] Aliases:map[]}"
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.269802306Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.279892605Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f61153492bd97fcbadbd6f95e9fce0d29c774d76bf8b9df6590786ab7018b4de UID:44dc2786-babf-4e74-89be-27670ac97906 NetNS:/var/run/netns/f47eb776-c7a5-460b-9936-589b3d47a49f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004805a0}] Aliases:map[]}"
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.280085371Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.28133937Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.282316781Z" level=info msg="Ran pod sandbox f61153492bd97fcbadbd6f95e9fce0d29c774d76bf8b9df6590786ab7018b4de with infra container: default/busybox/POD" id=d7d1e34c-cbef-4caa-9b53-bc1befed4834 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.28363591Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c9d9e515-6be6-40a9-9172-301d64c78c3c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.283773165Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c9d9e515-6be6-40a9-9172-301d64c78c3c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.283834584Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c9d9e515-6be6-40a9-9172-301d64c78c3c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.284385045Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4cc44e89-22ee-4f87-8eb3-cb382661bd1b name=/runtime.v1.ImageService/PullImage
	Dec 02 16:16:13 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:13.287362503Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 02 16:16:15 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:15.327457352Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=4cc44e89-22ee-4f87-8eb3-cb382661bd1b name=/runtime.v1.ImageService/PullImage
	Dec 02 16:16:15 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:15.328406025Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e7e57aab-6cf7-41b2-be23-4c099d7770c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:15 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:15.330097733Z" level=info msg="Creating container: default/busybox/busybox" id=281c4160-9439-4490-8e1d-9a69411f132a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:16:15 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:15.330208449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:15 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:15.335402042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:15 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:15.335842644Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:15 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:15.369929427Z" level=info msg="Created container dc02a7ec393dfd796160e01f0c9cc6e31b1dbd6438284178892e9625184c4a04: default/busybox/busybox" id=281c4160-9439-4490-8e1d-9a69411f132a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:16:15 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:15.370629007Z" level=info msg="Starting container: dc02a7ec393dfd796160e01f0c9cc6e31b1dbd6438284178892e9625184c4a04" id=3464b125-a0ee-49bb-adad-f4e5d03146c6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:16:15 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:15.372935208Z" level=info msg="Started container" PID=2192 containerID=dc02a7ec393dfd796160e01f0c9cc6e31b1dbd6438284178892e9625184c4a04 description=default/busybox/busybox id=3464b125-a0ee-49bb-adad-f4e5d03146c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f61153492bd97fcbadbd6f95e9fce0d29c774d76bf8b9df6590786ab7018b4de
	Dec 02 16:16:21 old-k8s-version-380588 crio[769]: time="2025-12-02T16:16:21.066963778Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	dc02a7ec393df       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   f61153492bd97       busybox                                          default
	357864372e14c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   b7d8d539dfb4a       coredns-5dd5756b68-fsfh2                         kube-system
	84d5fc5ab7a08       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   6a05943b1ef16       storage-provisioner                              kube-system
	713b10cb218ba       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   61cb3fbfa60ce       kindnet-cd4m6                                    kube-system
	d4a2d70b42b80       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   620853715ce27       kube-proxy-jqstm                                 kube-system
	add3fab3deffc       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   d377d6adb9650       kube-apiserver-old-k8s-version-380588            kube-system
	4b8cd75be6715       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   3f99576047b1d       kube-scheduler-old-k8s-version-380588            kube-system
	42fdc2aec1f67       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   4c4505bf9cffd       kube-controller-manager-old-k8s-version-380588   kube-system
	24735e1e02fca       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   b5b75e93a60c5       etcd-old-k8s-version-380588                      kube-system
	
	
	==> coredns [357864372e14c9eb748d3344ed2c2447a6e37a6c14a0129adfb4116fd6a890be] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41607 - 45129 "HINFO IN 1678909611564793220.8430159358173592376. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02517019s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-380588
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-380588
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=old-k8s-version-380588
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_15_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:15:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-380588
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:16:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:16:13 +0000   Tue, 02 Dec 2025 16:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:16:13 +0000   Tue, 02 Dec 2025 16:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:16:13 +0000   Tue, 02 Dec 2025 16:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:16:13 +0000   Tue, 02 Dec 2025 16:16:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-380588
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                c883883f-eefb-4ccc-83df-e6ee2918146f
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-fsfh2                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-380588                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-cd4m6                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-380588             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-380588    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-jqstm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-380588             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-380588 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-380588 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-380588 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node old-k8s-version-380588 event: Registered Node old-k8s-version-380588 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-380588 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [24735e1e02fca4a1142bfa387f906c9a3c25068f0e4880598c9cb232390421eb] <==
	{"level":"info","ts":"2025-12-02T16:15:37.236686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-02T16:15:37.237159Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-02T16:15:37.240554Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-02T16:15:37.240782Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-02T16:15:37.240811Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-02T16:15:37.241045Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-02T16:15:37.241076Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-02T16:15:38.225936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-02T16:15:38.226012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-02T16:15:38.226044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-12-02T16:15:38.226065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-12-02T16:15:38.226072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-02T16:15:38.226083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-02T16:15:38.226093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-02T16:15:38.227122Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-380588 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-02T16:15:38.227149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-02T16:15:38.22741Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-02T16:15:38.22747Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-02T16:15:38.227194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-02T16:15:38.227576Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T16:15:38.228454Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T16:15:38.228633Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-02T16:15:38.228953Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-02T16:15:38.228665Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T16:15:38.229158Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 16:16:22 up  2:58,  0 user,  load average: 5.90, 4.35, 2.64
	Linux old-k8s-version-380588 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [713b10cb218ba740bb63d47138b0d4cfb5b7e07111a1d94d8de2a72cc26325f3] <==
	I1202 16:15:58.774416       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 16:15:58.849404       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1202 16:15:58.849612       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:15:58.849636       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:15:58.849668       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:15:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:15:59.051529       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:15:59.051647       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:15:59.051684       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:15:59.052665       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 16:15:59.372050       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:15:59.372073       1 metrics.go:72] Registering metrics
	I1202 16:15:59.372119       1 controller.go:711] "Syncing nftables rules"
	I1202 16:16:09.059565       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 16:16:09.059642       1 main.go:301] handling current node
	I1202 16:16:19.051825       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 16:16:19.051858       1 main.go:301] handling current node
	
	
	==> kube-apiserver [add3fab3deffcf3e974791dbe89d14033cf7c3ecabecd4f3f0c8611990a15c11] <==
	I1202 16:15:39.430691       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 16:15:39.430698       1 cache.go:39] Caches are synced for autoregister controller
	I1202 16:15:39.430704       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1202 16:15:39.430739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 16:15:39.430844       1 shared_informer.go:318] Caches are synced for configmaps
	I1202 16:15:39.431165       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1202 16:15:39.431183       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1202 16:15:39.433723       1 controller.go:624] quota admission added evaluator for: namespaces
	I1202 16:15:39.439319       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1202 16:15:39.442520       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:15:40.336385       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1202 16:15:40.342624       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1202 16:15:40.342643       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 16:15:40.802337       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:15:40.840001       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:15:40.941450       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 16:15:40.947217       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1202 16:15:40.948229       1 controller.go:624] quota admission added evaluator for: endpoints
	I1202 16:15:40.952160       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:15:41.392501       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1202 16:15:42.550675       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1202 16:15:42.565817       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 16:15:42.576013       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1202 16:15:55.506128       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1202 16:15:55.513030       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [42fdc2aec1f677dc0d5403a26213f5c14527225158ba82f2680d26f25396f7de] <==
	I1202 16:15:55.540602       1 shared_informer.go:318] Caches are synced for HPA
	I1202 16:15:55.541227       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fsfh2"
	I1202 16:15:55.547493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.675432ms"
	I1202 16:15:55.556006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.459921ms"
	I1202 16:15:55.556123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.233µs"
	I1202 16:15:55.557909       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="132.691µs"
	I1202 16:15:55.574936       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1202 16:15:55.640597       1 shared_informer.go:318] Caches are synced for disruption
	I1202 16:15:55.644065       1 shared_informer.go:318] Caches are synced for resource quota
	I1202 16:15:55.681862       1 shared_informer.go:318] Caches are synced for resource quota
	I1202 16:15:55.990207       1 shared_informer.go:318] Caches are synced for garbage collector
	I1202 16:15:55.990255       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1202 16:15:56.015327       1 shared_informer.go:318] Caches are synced for garbage collector
	I1202 16:15:56.369044       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1202 16:15:56.382251       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-jznd6"
	I1202 16:15:56.397847       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="30.130241ms"
	I1202 16:15:56.408094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.179041ms"
	I1202 16:15:56.408222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.659µs"
	I1202 16:16:09.433209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.716µs"
	I1202 16:16:09.445029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.52µs"
	I1202 16:16:10.474516       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1202 16:16:10.474898       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-fsfh2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-fsfh2"
	I1202 16:16:10.474930       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1202 16:16:10.772576       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.030221ms"
	I1202 16:16:10.772821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.147µs"
	
	
	==> kube-proxy [d4a2d70b42b806a94367eb6230158a4ba2ffad773d878be788de8b1f930d1410] <==
	I1202 16:15:55.939898       1 server_others.go:69] "Using iptables proxy"
	I1202 16:15:55.957685       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1202 16:15:56.040060       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:15:56.046707       1 server_others.go:152] "Using iptables Proxier"
	I1202 16:15:56.046895       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1202 16:15:56.046963       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1202 16:15:56.047016       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1202 16:15:56.047301       1 server.go:846] "Version info" version="v1.28.0"
	I1202 16:15:56.047674       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:15:56.049052       1 config.go:188] "Starting service config controller"
	I1202 16:15:56.051368       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1202 16:15:56.049207       1 config.go:97] "Starting endpoint slice config controller"
	I1202 16:15:56.051445       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1202 16:15:56.049786       1 config.go:315] "Starting node config controller"
	I1202 16:15:56.051944       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1202 16:15:56.151600       1 shared_informer.go:318] Caches are synced for service config
	I1202 16:15:56.152138       1 shared_informer.go:318] Caches are synced for node config
	I1202 16:15:56.152249       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4b8cd75be67154a20904caec099699306760ace116ee6d984946073e5940f8e8] <==
	W1202 16:15:39.408753       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1202 16:15:39.410199       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 16:15:39.410211       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1202 16:15:39.410218       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1202 16:15:39.408893       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 16:15:39.410241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1202 16:15:40.216416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1202 16:15:40.216462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1202 16:15:40.323502       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1202 16:15:40.323537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1202 16:15:40.439609       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1202 16:15:40.439643       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1202 16:15:40.460046       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 16:15:40.460083       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1202 16:15:40.493470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 16:15:40.493513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1202 16:15:40.523616       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 16:15:40.523666       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1202 16:15:40.543951       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 16:15:40.543991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1202 16:15:40.581649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 16:15:40.581760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1202 16:15:40.624476       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1202 16:15:40.624510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1202 16:15:40.904751       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 16:15:55 old-k8s-version-380588 kubelet[1364]: I1202 16:15:55.529806    1364 topology_manager.go:215] "Topology Admit Handler" podUID="c32e74d7-f05f-4cbc-940e-bf5ce7f65de8" podNamespace="kube-system" podName="kube-proxy-jqstm"
	Dec 02 16:15:55 old-k8s-version-380588 kubelet[1364]: I1202 16:15:55.566265    1364 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 02 16:15:55 old-k8s-version-380588 kubelet[1364]: I1202 16:15:55.567120    1364 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 02 16:15:55 old-k8s-version-380588 kubelet[1364]: I1202 16:15:55.598826    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b00824ca-1af5-4aa6-b0a8-09f83c30bf49-cni-cfg\") pod \"kindnet-cd4m6\" (UID: \"b00824ca-1af5-4aa6-b0a8-09f83c30bf49\") " pod="kube-system/kindnet-cd4m6"
	Dec 02 16:15:55 old-k8s-version-380588 kubelet[1364]: I1202 16:15:55.598894    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2jhh\" (UniqueName: \"kubernetes.io/projected/c32e74d7-f05f-4cbc-940e-bf5ce7f65de8-kube-api-access-m2jhh\") pod \"kube-proxy-jqstm\" (UID: \"c32e74d7-f05f-4cbc-940e-bf5ce7f65de8\") " pod="kube-system/kube-proxy-jqstm"
	Dec 02 16:15:55 old-k8s-version-380588 kubelet[1364]: I1202 16:15:55.599035    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b00824ca-1af5-4aa6-b0a8-09f83c30bf49-xtables-lock\") pod \"kindnet-cd4m6\" (UID: \"b00824ca-1af5-4aa6-b0a8-09f83c30bf49\") " pod="kube-system/kindnet-cd4m6"
	Dec 02 16:15:55 old-k8s-version-380588 kubelet[1364]: I1202 16:15:55.599084    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxjxd\" (UniqueName: \"kubernetes.io/projected/b00824ca-1af5-4aa6-b0a8-09f83c30bf49-kube-api-access-kxjxd\") pod \"kindnet-cd4m6\" (UID: \"b00824ca-1af5-4aa6-b0a8-09f83c30bf49\") " pod="kube-system/kindnet-cd4m6"
	Dec 02 16:15:55 old-k8s-version-380588 kubelet[1364]: I1202 16:15:55.599120    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c32e74d7-f05f-4cbc-940e-bf5ce7f65de8-kube-proxy\") pod \"kube-proxy-jqstm\" (UID: \"c32e74d7-f05f-4cbc-940e-bf5ce7f65de8\") " pod="kube-system/kube-proxy-jqstm"
	Dec 02 16:15:55 old-k8s-version-380588 kubelet[1364]: I1202 16:15:55.599160    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c32e74d7-f05f-4cbc-940e-bf5ce7f65de8-xtables-lock\") pod \"kube-proxy-jqstm\" (UID: \"c32e74d7-f05f-4cbc-940e-bf5ce7f65de8\") " pod="kube-system/kube-proxy-jqstm"
	Dec 02 16:15:55 old-k8s-version-380588 kubelet[1364]: I1202 16:15:55.599191    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c32e74d7-f05f-4cbc-940e-bf5ce7f65de8-lib-modules\") pod \"kube-proxy-jqstm\" (UID: \"c32e74d7-f05f-4cbc-940e-bf5ce7f65de8\") " pod="kube-system/kube-proxy-jqstm"
	Dec 02 16:15:55 old-k8s-version-380588 kubelet[1364]: I1202 16:15:55.599246    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b00824ca-1af5-4aa6-b0a8-09f83c30bf49-lib-modules\") pod \"kindnet-cd4m6\" (UID: \"b00824ca-1af5-4aa6-b0a8-09f83c30bf49\") " pod="kube-system/kindnet-cd4m6"
	Dec 02 16:15:56 old-k8s-version-380588 kubelet[1364]: I1202 16:15:56.694175    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jqstm" podStartSLOduration=1.69410471 podCreationTimestamp="2025-12-02 16:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:15:56.693945097 +0000 UTC m=+14.170787889" watchObservedRunningTime="2025-12-02 16:15:56.69410471 +0000 UTC m=+14.170947484"
	Dec 02 16:15:58 old-k8s-version-380588 kubelet[1364]: I1202 16:15:58.709205    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-cd4m6" podStartSLOduration=1.018594786 podCreationTimestamp="2025-12-02 16:15:55 +0000 UTC" firstStartedPulling="2025-12-02 16:15:55.841491494 +0000 UTC m=+13.318334267" lastFinishedPulling="2025-12-02 16:15:58.532044327 +0000 UTC m=+16.008887090" observedRunningTime="2025-12-02 16:15:58.708871283 +0000 UTC m=+16.185714056" watchObservedRunningTime="2025-12-02 16:15:58.709147609 +0000 UTC m=+16.185990383"
	Dec 02 16:16:09 old-k8s-version-380588 kubelet[1364]: I1202 16:16:09.404797    1364 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 02 16:16:09 old-k8s-version-380588 kubelet[1364]: I1202 16:16:09.433161    1364 topology_manager.go:215] "Topology Admit Handler" podUID="b7a09569-0c93-481f-9bf0-4c943f83bcb2" podNamespace="kube-system" podName="coredns-5dd5756b68-fsfh2"
	Dec 02 16:16:09 old-k8s-version-380588 kubelet[1364]: I1202 16:16:09.434921    1364 topology_manager.go:215] "Topology Admit Handler" podUID="de6d872c-38c7-4bfa-a997-52fcc9c64976" podNamespace="kube-system" podName="storage-provisioner"
	Dec 02 16:16:09 old-k8s-version-380588 kubelet[1364]: I1202 16:16:09.599325    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kftfc\" (UniqueName: \"kubernetes.io/projected/b7a09569-0c93-481f-9bf0-4c943f83bcb2-kube-api-access-kftfc\") pod \"coredns-5dd5756b68-fsfh2\" (UID: \"b7a09569-0c93-481f-9bf0-4c943f83bcb2\") " pod="kube-system/coredns-5dd5756b68-fsfh2"
	Dec 02 16:16:09 old-k8s-version-380588 kubelet[1364]: I1202 16:16:09.599388    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/de6d872c-38c7-4bfa-a997-52fcc9c64976-tmp\") pod \"storage-provisioner\" (UID: \"de6d872c-38c7-4bfa-a997-52fcc9c64976\") " pod="kube-system/storage-provisioner"
	Dec 02 16:16:09 old-k8s-version-380588 kubelet[1364]: I1202 16:16:09.599452    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7a09569-0c93-481f-9bf0-4c943f83bcb2-config-volume\") pod \"coredns-5dd5756b68-fsfh2\" (UID: \"b7a09569-0c93-481f-9bf0-4c943f83bcb2\") " pod="kube-system/coredns-5dd5756b68-fsfh2"
	Dec 02 16:16:09 old-k8s-version-380588 kubelet[1364]: I1202 16:16:09.599593    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gvt6\" (UniqueName: \"kubernetes.io/projected/de6d872c-38c7-4bfa-a997-52fcc9c64976-kube-api-access-9gvt6\") pod \"storage-provisioner\" (UID: \"de6d872c-38c7-4bfa-a997-52fcc9c64976\") " pod="kube-system/storage-provisioner"
	Dec 02 16:16:10 old-k8s-version-380588 kubelet[1364]: I1202 16:16:10.743325    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.743267985 podCreationTimestamp="2025-12-02 16:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:10.743029253 +0000 UTC m=+28.219872026" watchObservedRunningTime="2025-12-02 16:16:10.743267985 +0000 UTC m=+28.220110760"
	Dec 02 16:16:12 old-k8s-version-380588 kubelet[1364]: I1202 16:16:12.962139    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fsfh2" podStartSLOduration=17.962081626 podCreationTimestamp="2025-12-02 16:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:10.762027288 +0000 UTC m=+28.238870079" watchObservedRunningTime="2025-12-02 16:16:12.962081626 +0000 UTC m=+30.438924420"
	Dec 02 16:16:12 old-k8s-version-380588 kubelet[1364]: I1202 16:16:12.962337    1364 topology_manager.go:215] "Topology Admit Handler" podUID="44dc2786-babf-4e74-89be-27670ac97906" podNamespace="default" podName="busybox"
	Dec 02 16:16:13 old-k8s-version-380588 kubelet[1364]: I1202 16:16:13.122132    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n84wg\" (UniqueName: \"kubernetes.io/projected/44dc2786-babf-4e74-89be-27670ac97906-kube-api-access-n84wg\") pod \"busybox\" (UID: \"44dc2786-babf-4e74-89be-27670ac97906\") " pod="default/busybox"
	Dec 02 16:16:15 old-k8s-version-380588 kubelet[1364]: I1202 16:16:15.756351    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.71251437 podCreationTimestamp="2025-12-02 16:16:12 +0000 UTC" firstStartedPulling="2025-12-02 16:16:13.284032004 +0000 UTC m=+30.760874771" lastFinishedPulling="2025-12-02 16:16:15.327812778 +0000 UTC m=+32.804655548" observedRunningTime="2025-12-02 16:16:15.756185829 +0000 UTC m=+33.233028603" watchObservedRunningTime="2025-12-02 16:16:15.756295147 +0000 UTC m=+33.233137920"
	
	
	==> storage-provisioner [84d5fc5ab7a0853401edc6003a88bb1ca6e8b510fa8783ec27d9df3a85577218] <==
	I1202 16:16:09.813814       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 16:16:09.823561       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 16:16:09.823648       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 16:16:09.831761       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 16:16:09.832128       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-380588_c73fec73-486d-4194-b211-44ff2d7ea3ab!
	I1202 16:16:09.832624       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc3102a2-5536-4fab-baaf-1e9e658904c7", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-380588_c73fec73-486d-4194-b211-44ff2d7ea3ab became leader
	I1202 16:16:09.932590       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-380588_c73fec73-486d-4194-b211-44ff2d7ea3ab!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-380588 -n old-k8s-version-380588
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-380588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-534842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-534842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (407.586245ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:16:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
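The MK_ADDON_ENABLE_PAUSED error above comes from minikube's pre-flight pause check: before enabling the addon it lists containers through runc, and on this crio node that listing exits 1 because /run/runc does not exist. A minimal way to reproduce the failing check by hand (a sketch only; the profile name is taken from the failing command above, and the suggestion to inspect crio's runtime_root is an assumption about where crio keeps its runc state, not something the test runs):

	# the exact command minikube reports as failing, executed inside the node
	minikube ssh -p no-preload-534842 "sudo runc list -f json"

	# crio may keep runc state under a different root; checking its config is an assumption
	minikube ssh -p no-preload-534842 "sudo grep -r runtime_root /etc/crio/"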
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-534842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-534842 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-534842 describe deploy/metrics-server -n kube-system: exit status 1 (111.77056ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-534842 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
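The assertion above compares the metrics-server deployment's container image against the remapped name. A hedged way to check that by hand once the deployment actually exists (the context and namespace are taken from the failing test; the jsonpath query is standard kubectl usage, not a command the test itself runs):

	kubectl --context no-preload-534842 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected by the test: fake.domain/registry.k8s.io/echoserver:1.4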
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-534842
helpers_test.go:243: (dbg) docker inspect no-preload-534842:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa",
	        "Created": "2025-12-02T16:15:33.245538199Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 585639,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:15:33.28127699Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa/hosts",
	        "LogPath": "/var/lib/docker/containers/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa-json.log",
	        "Name": "/no-preload-534842",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-534842:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-534842",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa",
	                "LowerDir": "/var/lib/docker/overlay2/6a4b4e52df8b38f90b870d8719f5a1cc2a12c6d10fe8621038d418792a62b0c1-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a4b4e52df8b38f90b870d8719f5a1cc2a12c6d10fe8621038d418792a62b0c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a4b4e52df8b38f90b870d8719f5a1cc2a12c6d10fe8621038d418792a62b0c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a4b4e52df8b38f90b870d8719f5a1cc2a12c6d10fe8621038d418792a62b0c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-534842",
	                "Source": "/var/lib/docker/volumes/no-preload-534842/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-534842",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-534842",
	                "name.minikube.sigs.k8s.io": "no-preload-534842",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1344e38070a1c374115f3489dbcc6300acc639f6459b921b767754e3e8c3035b",
	            "SandboxKey": "/var/run/docker/netns/1344e38070a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33223"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33224"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33228"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33225"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33227"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-534842": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26f54f8ab80db170a83b2dc1c670501109df1b38f3efe9f0b57bf1b09b594ad5",
	                    "EndpointID": "0f9465e11ba11bd3d3d447c850ebb8ff7333ebe9ba312233c17265e56f887f82",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "52:b1:e7:57:8a:23",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-534842",
	                        "a2904e47fdbb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534842 -n no-preload-534842
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-534842 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-534842 logs -n 25: (1.256359148s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-589300 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo docker system info                                                                                                                                 │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cri-dockerd --version                                                                                                                              │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo containerd config dump                                                                                                                             │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo crio config                                                                                                                                        │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p bridge-589300                                                                                                                                                         │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p disable-driver-mounts-904481                                                                                                                                          │ disable-driver-mounts-904481 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-380588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p old-k8s-version-380588 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-534842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:16:14
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:16:14.209718  601673 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:16:14.209821  601673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:16:14.209830  601673 out.go:374] Setting ErrFile to fd 2...
	I1202 16:16:14.209834  601673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:16:14.210072  601673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:16:14.210617  601673 out.go:368] Setting JSON to false
	I1202 16:16:14.211833  601673 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10715,"bootTime":1764681459,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:16:14.211891  601673 start.go:143] virtualization: kvm guest
	I1202 16:16:14.214072  601673 out.go:179] * [default-k8s-diff-port-806420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:16:14.215581  601673 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:16:14.215590  601673 notify.go:221] Checking for updates...
	I1202 16:16:14.218253  601673 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:16:14.219997  601673 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:16:14.221278  601673 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:16:14.222622  601673 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:16:14.223868  601673 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:16:14.225645  601673 config.go:182] Loaded profile config "embed-certs-046271": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:16:14.225795  601673 config.go:182] Loaded profile config "no-preload-534842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:16:14.225922  601673 config.go:182] Loaded profile config "old-k8s-version-380588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 16:16:14.226040  601673 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:16:14.251945  601673 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:16:14.252053  601673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:16:14.323181  601673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:16:14.311584435 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:16:14.323275  601673 docker.go:319] overlay module found
	I1202 16:16:14.325245  601673 out.go:179] * Using the docker driver based on user configuration
	I1202 16:16:14.326499  601673 start.go:309] selected driver: docker
	I1202 16:16:14.326520  601673 start.go:927] validating driver "docker" against <nil>
	I1202 16:16:14.326535  601673 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:16:14.327262  601673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:16:14.392849  601673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:16:14.382511545 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:16:14.393101  601673 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 16:16:14.393473  601673 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:16:14.395493  601673 out.go:179] * Using Docker driver with root privileges
	I1202 16:16:14.396988  601673 cni.go:84] Creating CNI manager for ""
	I1202 16:16:14.397076  601673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:16:14.397091  601673 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 16:16:14.397187  601673 start.go:353] cluster config:
	{Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
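	(For reference, the cluster config dumped above corresponds roughly to a start invocation like the sketch below; this is an approximate reconstruction from the config fields, not the exact command the test harness ran.)
	  out/minikube-linux-amd64 start -p default-k8s-diff-port-806420 \
	    --driver=docker --container-runtime=crio \
	    --memory=3072 --cpus=2 --apiserver-port=8444 \
	    --kubernetes-version=v1.34.2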
	I1202 16:16:14.398582  601673 out.go:179] * Starting "default-k8s-diff-port-806420" primary control-plane node in "default-k8s-diff-port-806420" cluster
	I1202 16:16:14.399660  601673 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:16:14.401383  601673 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:16:14.402693  601673 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:16:14.402731  601673 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 16:16:14.402743  601673 cache.go:65] Caching tarball of preloaded images
	I1202 16:16:14.402785  601673 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:16:14.402839  601673 preload.go:238] Found /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 16:16:14.402853  601673 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 16:16:14.402957  601673 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json ...
	I1202 16:16:14.402979  601673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json: {Name:mkddee2b359a6629d691bd3c15cafa759bf3a2ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:14.425924  601673 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:16:14.425951  601673 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 16:16:14.425971  601673 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:16:14.426010  601673 start.go:360] acquireMachinesLock for default-k8s-diff-port-806420: {Name:mk8a961b68c6bbf9b1910f8ae43c90e49f86c0f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:14.426130  601673 start.go:364] duration metric: took 97.185µs to acquireMachinesLock for "default-k8s-diff-port-806420"
	I1202 16:16:14.426162  601673 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:16:14.426257  601673 start.go:125] createHost starting for "" (driver="docker")
	I1202 16:16:12.110881  595674 out.go:252]   - Generating certificates and keys ...
	I1202 16:16:12.111007  595674 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 16:16:12.111093  595674 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 16:16:12.165735  595674 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 16:16:12.445729  595674 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 16:16:12.855159  595674 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 16:16:13.031505  595674 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 16:16:13.203185  595674 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 16:16:13.203385  595674 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-046271 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1202 16:16:13.732495  595674 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 16:16:13.732716  595674 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-046271 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1202 16:16:13.976699  595674 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 16:16:14.213536  595674 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 16:16:14.570978  595674 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 16:16:14.571154  595674 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 16:16:14.742176  595674 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 16:16:15.663038  595674 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 16:16:15.949044  595674 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 16:16:16.421414  595674 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 16:16:16.578550  595674 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 16:16:16.579307  595674 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 16:16:16.585128  595674 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
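	(The certificates and kubeconfig files kubeadm reports writing above live under /etc/kubernetes inside the node; a minimal way to inspect them afterwards is sketched below, assuming the profile name from this log and the standard kubeadm pki layout:)
	  minikube -p embed-certs-046271 ssh -- sudo ls /etc/kubernetes/pki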
	W1202 16:16:13.902752  584973 node_ready.go:57] node "no-preload-534842" has "Ready":"False" status (will retry)
	W1202 16:16:16.402531  584973 node_ready.go:57] node "no-preload-534842" has "Ready":"False" status (will retry)
	I1202 16:16:16.901976  584973 node_ready.go:49] node "no-preload-534842" is "Ready"
	I1202 16:16:16.902011  584973 node_ready.go:38] duration metric: took 14.503217219s for node "no-preload-534842" to be "Ready" ...
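	(The node readiness the retries above were waiting for can also be checked directly with kubectl; a sketch, assuming the kubeconfig context carries the profile name as minikube configures it:)
	  kubectl --context no-preload-534842 get node no-preload-534842 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'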
	I1202 16:16:16.902029  584973 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:16:16.902086  584973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:16:16.914832  584973 api_server.go:72] duration metric: took 14.940636388s to wait for apiserver process to appear ...
	I1202 16:16:16.914860  584973 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:16:16.914879  584973 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1202 16:16:16.919290  584973 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1202 16:16:16.920297  584973 api_server.go:141] control plane version: v1.35.0-beta.0
	I1202 16:16:16.920326  584973 api_server.go:131] duration metric: took 5.457583ms to wait for apiserver health ...
	I1202 16:16:16.920338  584973 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:16:16.923886  584973 system_pods.go:59] 8 kube-system pods found
	I1202 16:16:16.923938  584973 system_pods.go:61] "coredns-7d764666f9-fxl4s" [7716bc36-76db-41a6-8acc-0025ea0b7787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:16.923953  584973 system_pods.go:61] "etcd-no-preload-534842" [4156cb3a-c013-41ad-8c3e-4b32efbd243f] Running
	I1202 16:16:16.923965  584973 system_pods.go:61] "kindnet-fn84j" [e8f80ec9-4aff-4de8-aa5a-e262160e51d7] Running
	I1202 16:16:16.923971  584973 system_pods.go:61] "kube-apiserver-no-preload-534842" [d97ac382-11e9-48cd-8672-72fa466fd1d5] Running
	I1202 16:16:16.923977  584973 system_pods.go:61] "kube-controller-manager-no-preload-534842" [f7d15fa4-0c27-4ca5-a37b-f78a693fa541] Running
	I1202 16:16:16.923988  584973 system_pods.go:61] "kube-proxy-xqnrx" [d56d7371-0677-4746-972b-b3d24b8070f2] Running
	I1202 16:16:16.923997  584973 system_pods.go:61] "kube-scheduler-no-preload-534842" [73c10cb6-aaa7-4128-896f-16def63b750d] Running
	I1202 16:16:16.924005  584973 system_pods.go:61] "storage-provisioner" [15ec190a-3c61-47f3-87a1-c5ab08d312b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:16.924029  584973 system_pods.go:74] duration metric: took 3.682859ms to wait for pod list to return data ...
	I1202 16:16:16.924038  584973 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:16:16.926663  584973 default_sa.go:45] found service account: "default"
	I1202 16:16:16.926690  584973 default_sa.go:55] duration metric: took 2.641195ms for default service account to be created ...
	I1202 16:16:16.926698  584973 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:16:16.929133  584973 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:16.929157  584973 system_pods.go:89] "coredns-7d764666f9-fxl4s" [7716bc36-76db-41a6-8acc-0025ea0b7787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:16.929163  584973 system_pods.go:89] "etcd-no-preload-534842" [4156cb3a-c013-41ad-8c3e-4b32efbd243f] Running
	I1202 16:16:16.929168  584973 system_pods.go:89] "kindnet-fn84j" [e8f80ec9-4aff-4de8-aa5a-e262160e51d7] Running
	I1202 16:16:16.929172  584973 system_pods.go:89] "kube-apiserver-no-preload-534842" [d97ac382-11e9-48cd-8672-72fa466fd1d5] Running
	I1202 16:16:16.929180  584973 system_pods.go:89] "kube-controller-manager-no-preload-534842" [f7d15fa4-0c27-4ca5-a37b-f78a693fa541] Running
	I1202 16:16:16.929184  584973 system_pods.go:89] "kube-proxy-xqnrx" [d56d7371-0677-4746-972b-b3d24b8070f2] Running
	I1202 16:16:16.929188  584973 system_pods.go:89] "kube-scheduler-no-preload-534842" [73c10cb6-aaa7-4128-896f-16def63b750d] Running
	I1202 16:16:16.929195  584973 system_pods.go:89] "storage-provisioner" [15ec190a-3c61-47f3-87a1-c5ab08d312b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:16.929236  584973 retry.go:31] will retry after 220.12248ms: missing components: kube-dns
	I1202 16:16:17.154142  584973 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:17.154179  584973 system_pods.go:89] "coredns-7d764666f9-fxl4s" [7716bc36-76db-41a6-8acc-0025ea0b7787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:17.154187  584973 system_pods.go:89] "etcd-no-preload-534842" [4156cb3a-c013-41ad-8c3e-4b32efbd243f] Running
	I1202 16:16:17.154194  584973 system_pods.go:89] "kindnet-fn84j" [e8f80ec9-4aff-4de8-aa5a-e262160e51d7] Running
	I1202 16:16:17.154199  584973 system_pods.go:89] "kube-apiserver-no-preload-534842" [d97ac382-11e9-48cd-8672-72fa466fd1d5] Running
	I1202 16:16:17.154204  584973 system_pods.go:89] "kube-controller-manager-no-preload-534842" [f7d15fa4-0c27-4ca5-a37b-f78a693fa541] Running
	I1202 16:16:17.154210  584973 system_pods.go:89] "kube-proxy-xqnrx" [d56d7371-0677-4746-972b-b3d24b8070f2] Running
	I1202 16:16:17.154215  584973 system_pods.go:89] "kube-scheduler-no-preload-534842" [73c10cb6-aaa7-4128-896f-16def63b750d] Running
	I1202 16:16:17.154222  584973 system_pods.go:89] "storage-provisioner" [15ec190a-3c61-47f3-87a1-c5ab08d312b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:17.154242  584973 retry.go:31] will retry after 331.235999ms: missing components: kube-dns
	I1202 16:16:14.428219  601673 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 16:16:14.428485  601673 start.go:159] libmachine.API.Create for "default-k8s-diff-port-806420" (driver="docker")
	I1202 16:16:14.428511  601673 client.go:173] LocalClient.Create starting
	I1202 16:16:14.428579  601673 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem
	I1202 16:16:14.428613  601673 main.go:143] libmachine: Decoding PEM data...
	I1202 16:16:14.428629  601673 main.go:143] libmachine: Parsing certificate...
	I1202 16:16:14.428692  601673 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem
	I1202 16:16:14.428714  601673 main.go:143] libmachine: Decoding PEM data...
	I1202 16:16:14.428726  601673 main.go:143] libmachine: Parsing certificate...
	I1202 16:16:14.429043  601673 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-806420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 16:16:14.448324  601673 cli_runner.go:211] docker network inspect default-k8s-diff-port-806420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 16:16:14.448398  601673 network_create.go:284] running [docker network inspect default-k8s-diff-port-806420] to gather additional debugging logs...
	I1202 16:16:14.448414  601673 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-806420
	W1202 16:16:14.467616  601673 cli_runner.go:211] docker network inspect default-k8s-diff-port-806420 returned with exit code 1
	I1202 16:16:14.467644  601673 network_create.go:287] error running [docker network inspect default-k8s-diff-port-806420]: docker network inspect default-k8s-diff-port-806420: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-806420 not found
	I1202 16:16:14.467657  601673 network_create.go:289] output of [docker network inspect default-k8s-diff-port-806420]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-806420 not found
	
	** /stderr **
	I1202 16:16:14.467747  601673 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:16:14.486569  601673 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-59c4d474daec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:20:cf:7a:79:c5} reservation:<nil>}
	I1202 16:16:14.487151  601673 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-208582b1a4af IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:5b:fe:2d:46:75} reservation:<nil>}
	I1202 16:16:14.487755  601673 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-230a00bd70ce IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:8f:10:7f:8e:d3} reservation:<nil>}
	I1202 16:16:14.488333  601673 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f242ea03e26e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:4d:9d:95:a5:56} reservation:<nil>}
	I1202 16:16:14.489070  601673 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cef390}
	I1202 16:16:14.489096  601673 network_create.go:124] attempt to create docker network default-k8s-diff-port-806420 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1202 16:16:14.489142  601673 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-806420 default-k8s-diff-port-806420
	I1202 16:16:14.543099  601673 network_create.go:108] docker network default-k8s-diff-port-806420 192.168.85.0/24 created
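	(The subnet probing above can be reproduced by hand; a minimal sketch that lists which bridge subnets Docker has already allocated, which is effectively what the "skipping subnet ... that is taken" logic checks before settling on 192.168.85.0/24:)
	  docker network ls --filter driver=bridge -q \
	    | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'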
	I1202 16:16:14.543147  601673 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-806420" container
	I1202 16:16:14.543223  601673 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 16:16:14.562850  601673 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-806420 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-806420 --label created_by.minikube.sigs.k8s.io=true
	I1202 16:16:14.583757  601673 oci.go:103] Successfully created a docker volume default-k8s-diff-port-806420
	I1202 16:16:14.583833  601673 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-806420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-806420 --entrypoint /usr/bin/test -v default-k8s-diff-port-806420:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 16:16:15.002273  601673 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-806420
	I1202 16:16:15.002360  601673 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:16:15.002376  601673 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 16:16:15.002481  601673 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-806420:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 16:16:19.095938  601673 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-806420:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.093398371s)
	I1202 16:16:19.095980  601673 kic.go:203] duration metric: took 4.093597472s to extract preloaded images to volume ...
	W1202 16:16:19.096082  601673 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 16:16:19.096127  601673 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 16:16:19.096179  601673 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 16:16:19.161087  601673 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-806420 --name default-k8s-diff-port-806420 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-806420 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-806420 --network default-k8s-diff-port-806420 --ip 192.168.85.2 --volume default-k8s-diff-port-806420:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
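	(Because the container created above publishes 22/tcp on an ephemeral 127.0.0.1 port, the mapping can be read back with docker port; a sketch, using the container name from this log:)
	  docker port default-k8s-diff-port-806420 22
	  # expected form: 127.0.0.1:<ephemeral port>, e.g. 127.0.0.1:33235 later in this log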
	I1202 16:16:17.490138  584973 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:17.490177  584973 system_pods.go:89] "coredns-7d764666f9-fxl4s" [7716bc36-76db-41a6-8acc-0025ea0b7787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:17.490185  584973 system_pods.go:89] "etcd-no-preload-534842" [4156cb3a-c013-41ad-8c3e-4b32efbd243f] Running
	I1202 16:16:17.490193  584973 system_pods.go:89] "kindnet-fn84j" [e8f80ec9-4aff-4de8-aa5a-e262160e51d7] Running
	I1202 16:16:17.490198  584973 system_pods.go:89] "kube-apiserver-no-preload-534842" [d97ac382-11e9-48cd-8672-72fa466fd1d5] Running
	I1202 16:16:17.490204  584973 system_pods.go:89] "kube-controller-manager-no-preload-534842" [f7d15fa4-0c27-4ca5-a37b-f78a693fa541] Running
	I1202 16:16:17.490210  584973 system_pods.go:89] "kube-proxy-xqnrx" [d56d7371-0677-4746-972b-b3d24b8070f2] Running
	I1202 16:16:17.490214  584973 system_pods.go:89] "kube-scheduler-no-preload-534842" [73c10cb6-aaa7-4128-896f-16def63b750d] Running
	I1202 16:16:17.490226  584973 system_pods.go:89] "storage-provisioner" [15ec190a-3c61-47f3-87a1-c5ab08d312b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:17.490255  584973 retry.go:31] will retry after 427.721515ms: missing components: kube-dns
	I1202 16:16:17.923513  584973 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:17.923554  584973 system_pods.go:89] "coredns-7d764666f9-fxl4s" [7716bc36-76db-41a6-8acc-0025ea0b7787] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:17.923564  584973 system_pods.go:89] "etcd-no-preload-534842" [4156cb3a-c013-41ad-8c3e-4b32efbd243f] Running
	I1202 16:16:17.923572  584973 system_pods.go:89] "kindnet-fn84j" [e8f80ec9-4aff-4de8-aa5a-e262160e51d7] Running
	I1202 16:16:17.923578  584973 system_pods.go:89] "kube-apiserver-no-preload-534842" [d97ac382-11e9-48cd-8672-72fa466fd1d5] Running
	I1202 16:16:17.923624  584973 system_pods.go:89] "kube-controller-manager-no-preload-534842" [f7d15fa4-0c27-4ca5-a37b-f78a693fa541] Running
	I1202 16:16:17.923635  584973 system_pods.go:89] "kube-proxy-xqnrx" [d56d7371-0677-4746-972b-b3d24b8070f2] Running
	I1202 16:16:17.923641  584973 system_pods.go:89] "kube-scheduler-no-preload-534842" [73c10cb6-aaa7-4128-896f-16def63b750d] Running
	I1202 16:16:17.923646  584973 system_pods.go:89] "storage-provisioner" [15ec190a-3c61-47f3-87a1-c5ab08d312b1] Running
	I1202 16:16:17.923655  584973 system_pods.go:126] duration metric: took 996.950812ms to wait for k8s-apps to be running ...
	I1202 16:16:17.923668  584973 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:16:17.923722  584973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:16:17.938289  584973 system_svc.go:56] duration metric: took 14.613001ms WaitForService to wait for kubelet
	I1202 16:16:17.938320  584973 kubeadm.go:587] duration metric: took 15.964129537s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:16:17.938344  584973 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:16:18.075882  584973 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:16:18.075919  584973 node_conditions.go:123] node cpu capacity is 8
	I1202 16:16:18.075940  584973 node_conditions.go:105] duration metric: took 137.589775ms to run NodePressure ...
	I1202 16:16:18.075956  584973 start.go:242] waiting for startup goroutines ...
	I1202 16:16:18.075966  584973 start.go:247] waiting for cluster config update ...
	I1202 16:16:18.076026  584973 start.go:256] writing updated cluster config ...
	I1202 16:16:18.076876  584973 ssh_runner.go:195] Run: rm -f paused
	I1202 16:16:18.082417  584973 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:16:18.124890  584973 pod_ready.go:83] waiting for pod "coredns-7d764666f9-fxl4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.170125  584973 pod_ready.go:94] pod "coredns-7d764666f9-fxl4s" is "Ready"
	I1202 16:16:18.170160  584973 pod_ready.go:86] duration metric: took 45.235203ms for pod "coredns-7d764666f9-fxl4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.172726  584973 pod_ready.go:83] waiting for pod "etcd-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.219736  584973 pod_ready.go:94] pod "etcd-no-preload-534842" is "Ready"
	I1202 16:16:18.219762  584973 pod_ready.go:86] duration metric: took 47.010393ms for pod "etcd-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.273996  584973 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.278837  584973 pod_ready.go:94] pod "kube-apiserver-no-preload-534842" is "Ready"
	I1202 16:16:18.278865  584973 pod_ready.go:86] duration metric: took 4.833809ms for pod "kube-apiserver-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.281286  584973 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.486813  584973 pod_ready.go:94] pod "kube-controller-manager-no-preload-534842" is "Ready"
	I1202 16:16:18.486842  584973 pod_ready.go:86] duration metric: took 205.53029ms for pod "kube-controller-manager-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:18.687204  584973 pod_ready.go:83] waiting for pod "kube-proxy-xqnrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:19.087513  584973 pod_ready.go:94] pod "kube-proxy-xqnrx" is "Ready"
	I1202 16:16:19.087542  584973 pod_ready.go:86] duration metric: took 400.310852ms for pod "kube-proxy-xqnrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:19.287892  584973 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:19.687211  584973 pod_ready.go:94] pod "kube-scheduler-no-preload-534842" is "Ready"
	I1202 16:16:19.687244  584973 pod_ready.go:86] duration metric: took 399.325419ms for pod "kube-scheduler-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:19.687261  584973 pod_ready.go:40] duration metric: took 1.604793693s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:16:19.754680  584973 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 16:16:19.760999  584973 out.go:179] * Done! kubectl is now configured to use "no-preload-534842" cluster and "default" namespace by default
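	(With this start complete, the kubeconfig context for the profile is selected by default, as the line above states; switching between the clusters started in this log is a plain context switch, sketched with standard kubectl commands:)
	  kubectl config get-contexts
	  kubectl config use-context no-preload-534842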
	I1202 16:16:16.586965  595674 out.go:252]   - Booting up control plane ...
	I1202 16:16:16.587098  595674 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 16:16:16.587227  595674 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 16:16:16.587918  595674 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 16:16:16.604237  595674 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 16:16:16.604343  595674 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 16:16:16.612211  595674 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 16:16:16.612373  595674 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 16:16:16.612465  595674 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 16:16:16.728880  595674 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 16:16:16.729061  595674 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 16:16:18.229715  595674 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500978888s
	I1202 16:16:18.232625  595674 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 16:16:18.232774  595674 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1202 16:16:18.232903  595674 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 16:16:18.233042  595674 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 16:16:20.394553  595674 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.161747655s
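	(The control-plane health endpoints polled above can be queried directly from inside the node; a sketch with curl, using -k because the serving certificates are self-signed:)
	  curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
	  curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
	  curl -sk https://192.168.76.2:8443/livez   # kube-apiserver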
	I1202 16:16:19.487764  601673 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Running}}
	I1202 16:16:19.507812  601673 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:16:19.526627  601673 cli_runner.go:164] Run: docker exec default-k8s-diff-port-806420 stat /var/lib/dpkg/alternatives/iptables
	I1202 16:16:19.579339  601673 oci.go:144] the created container "default-k8s-diff-port-806420" has a running status.
	I1202 16:16:19.579384  601673 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa...
	I1202 16:16:19.907730  601673 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 16:16:19.952144  601673 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:16:19.984788  601673 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 16:16:19.984812  601673 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-806420 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 16:16:20.048107  601673 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:16:20.071379  601673 machine.go:94] provisionDockerMachine start ...
	I1202 16:16:20.071495  601673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:16:20.095016  601673 main.go:143] libmachine: Using SSH client type: native
	I1202 16:16:20.095324  601673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33235 <nil> <nil>}
	I1202 16:16:20.095341  601673 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:16:20.255814  601673 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-806420
	
	I1202 16:16:20.255849  601673 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-806420"
	I1202 16:16:20.255927  601673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:16:20.282777  601673 main.go:143] libmachine: Using SSH client type: native
	I1202 16:16:20.283084  601673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33235 <nil> <nil>}
	I1202 16:16:20.283105  601673 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-806420 && echo "default-k8s-diff-port-806420" | sudo tee /etc/hostname
	I1202 16:16:20.459824  601673 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-806420
	
	I1202 16:16:20.459914  601673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:16:20.482335  601673 main.go:143] libmachine: Using SSH client type: native
	I1202 16:16:20.482652  601673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33235 <nil> <nil>}
	I1202 16:16:20.482695  601673 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-806420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-806420/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-806420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:16:20.638652  601673 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:16:20.638707  601673 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:16:20.638765  601673 ubuntu.go:190] setting up certificates
	I1202 16:16:20.638781  601673 provision.go:84] configureAuth start
	I1202 16:16:20.638853  601673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:16:20.667638  601673 provision.go:143] copyHostCerts
	I1202 16:16:20.667714  601673 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:16:20.667729  601673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:16:20.667817  601673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:16:20.667936  601673 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:16:20.667946  601673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:16:20.667986  601673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:16:20.668060  601673 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:16:20.668070  601673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:16:20.668102  601673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:16:20.668166  601673 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-806420 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-806420 localhost minikube]
	I1202 16:16:20.810388  601673 provision.go:177] copyRemoteCerts
	I1202 16:16:20.810477  601673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:16:20.810528  601673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:16:20.835349  601673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33235 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:16:20.937821  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:16:20.959307  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1202 16:16:20.979141  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:16:20.998367  601673 provision.go:87] duration metric: took 359.569884ms to configureAuth
	I1202 16:16:20.998404  601673 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:16:20.998585  601673 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:16:20.998684  601673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:16:21.018742  601673 main.go:143] libmachine: Using SSH client type: native
	I1202 16:16:21.019019  601673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33235 <nil> <nil>}
	I1202 16:16:21.019053  601673 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:16:21.386738  601673 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:16:21.386768  601673 machine.go:97] duration metric: took 1.31536715s to provisionDockerMachine
	I1202 16:16:21.386782  601673 client.go:176] duration metric: took 6.95826395s to LocalClient.Create
	I1202 16:16:21.386801  601673 start.go:167] duration metric: took 6.958316676s to libmachine.API.Create "default-k8s-diff-port-806420"
	I1202 16:16:21.386816  601673 start.go:293] postStartSetup for "default-k8s-diff-port-806420" (driver="docker")
	I1202 16:16:21.386829  601673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:16:21.386889  601673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:16:21.386936  601673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:16:21.414690  601673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33235 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:16:21.532871  601673 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:16:21.537292  601673 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:16:21.537324  601673 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:16:21.537336  601673 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:16:21.537394  601673 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:16:21.537523  601673 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:16:21.537710  601673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:16:21.547082  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:16:21.572748  601673 start.go:296] duration metric: took 185.91374ms for postStartSetup
	I1202 16:16:21.573155  601673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:16:21.593804  601673 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json ...
	I1202 16:16:21.594132  601673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:16:21.594214  601673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:16:21.615641  601673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33235 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:16:21.719502  601673 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:16:21.724085  601673 start.go:128] duration metric: took 7.297809595s to createHost
	I1202 16:16:21.724110  601673 start.go:83] releasing machines lock for "default-k8s-diff-port-806420", held for 7.297967707s
	I1202 16:16:21.724179  601673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:16:21.743972  601673 ssh_runner.go:195] Run: cat /version.json
	I1202 16:16:21.744047  601673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:16:21.744069  601673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:16:21.744153  601673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:16:21.768304  601673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33235 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:16:21.768653  601673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33235 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:16:21.944956  601673 ssh_runner.go:195] Run: systemctl --version
	I1202 16:16:21.951948  601673 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:16:21.990786  601673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:16:21.996530  601673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:16:21.996598  601673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:16:22.029285  601673 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 16:16:22.029311  601673 start.go:496] detecting cgroup driver to use...
	I1202 16:16:22.029349  601673 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:16:22.029490  601673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:16:22.050169  601673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:16:22.064074  601673 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:16:22.064131  601673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:16:22.083320  601673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:16:22.102248  601673 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:16:22.202201  601673 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:16:22.331147  601673 docker.go:234] disabling docker service ...
	I1202 16:16:22.331242  601673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:16:22.357982  601673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:16:22.376709  601673 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:16:22.500502  601673 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:16:22.613942  601673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:16:22.630399  601673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:16:22.649220  601673 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:16:22.649304  601673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:22.663772  601673 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:16:22.663852  601673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:22.675579  601673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:22.687621  601673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:22.698994  601673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:16:22.710351  601673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:22.722462  601673 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:22.741842  601673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:22.753824  601673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:16:22.764112  601673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:16:22.773885  601673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:16:22.879806  601673 ssh_runner.go:195] Run: sudo systemctl restart crio
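Taken together, the commands from /etc/crictl.yaml onward set CRI-O up for this profile: crictl is pointed at the CRI-O socket, the pause image and the systemd cgroup manager are set, conmon is moved into the pod cgroup, unprivileged low ports are opened via default_sysctls, IPv4 forwarding is enabled, and the daemon is restarted. A consolidated shell sketch of that sequence (paths and values copied from the log; run as root on the node):

    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml

    conf=/etc/crio/crio.conf.d/02-crio.conf

    # Pause image, systemd cgroup driver, conmon in the pod-level cgroup.
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
    sed -i '/conmon_cgroup = .*/d' "$conf"
    sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

    # Let pods bind low ports without extra capabilities.
    grep -q '^ *default_sysctls' "$conf" || \
      sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
    sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

    # Enable forwarding and restart the runtime.
    echo 1 > /proc/sys/net/ipv4/ip_forward
    systemctl daemon-reload
    systemctl restart crio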
	I1202 16:16:23.041934  601673 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:16:23.042017  601673 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:16:23.046383  601673 start.go:564] Will wait 60s for crictl version
	I1202 16:16:23.046475  601673 ssh_runner.go:195] Run: which crictl
	I1202 16:16:23.050820  601673 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:16:23.075860  601673 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 16:16:23.075951  601673 ssh_runner.go:195] Run: crio --version
	I1202 16:16:23.108218  601673 ssh_runner.go:195] Run: crio --version
	I1202 16:16:23.144296  601673 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 16:16:21.341062  595674 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.108389447s
	I1202 16:16:23.234818  595674 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001827796s
	I1202 16:16:23.253760  595674 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 16:16:23.266138  595674 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 16:16:23.281343  595674 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 16:16:23.281705  595674 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-046271 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 16:16:23.293002  595674 kubeadm.go:319] [bootstrap-token] Using token: 462rtt.jh9pi9ht29u5ggsc
	I1202 16:16:23.145404  601673 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-806420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:16:23.164835  601673 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 16:16:23.169245  601673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
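The grep/cp pair above is a small idempotent /etc/hosts update: any stale host.minikube.internal line is filtered out, the fresh mapping is appended, and the rebuilt file is copied back in one step. The same idiom, spelled out with the IP from the log:

    # Rebuild /etc/hosts without any stale entry for the name, append the
    # fresh mapping, then swap the file back into place.
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.85.1\thost.minikube.internal\n'; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts
    rm -f "/tmp/hosts.$$"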
	I1202 16:16:23.181063  601673 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:16:23.181178  601673 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:16:23.181218  601673 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:16:23.218739  601673 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:16:23.218761  601673 crio.go:433] Images already preloaded, skipping extraction
	I1202 16:16:23.218806  601673 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:16:23.248503  601673 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:16:23.248529  601673 cache_images.go:86] Images are preloaded, skipping loading
	I1202 16:16:23.248538  601673 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1202 16:16:23.248648  601673 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-806420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
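The rendered unit text above relies on the standard systemd drop-in override: an empty ExecStart= first clears the command inherited from kubelet.service, and the following ExecStart= substitutes minikube's kubelet invocation. A sketch of writing such a drop-in by hand (flags abridged from the unit above):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d

    # Drop-in override: the empty ExecStart= clears the inherited command,
    # the second ExecStart= supplies the replacement.
    printf '%s\n' \
      '[Service]' \
      'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --hostname-override=default-k8s-diff-port-806420 --node-ip=192.168.85.2' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null

    sudo systemctl daemon-reload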
	I1202 16:16:23.248728  601673 ssh_runner.go:195] Run: crio config
	I1202 16:16:23.313739  601673 cni.go:84] Creating CNI manager for ""
	I1202 16:16:23.313772  601673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:16:23.313800  601673 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 16:16:23.313831  601673 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-806420 NodeName:default-k8s-diff-port-806420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:16:23.314018  601673 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-806420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 16:16:23.314125  601673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 16:16:23.325301  601673 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 16:16:23.325375  601673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:16:23.335792  601673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1202 16:16:23.353545  601673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 16:16:23.373772  601673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
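With the rendered config now on the node as /var/tmp/minikube/kubeadm.yaml.new, it can be exercised without changing anything via kubeadm's dry-run mode (a sketch, assuming the kubeadm binary minikube caches under /var/lib/minikube/binaries):

    # Print the manifests and steps kubeadm would apply, without applying them.
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run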
	I1202 16:16:23.391087  601673 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:16:23.396146  601673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:16:23.413313  601673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:16:23.517250  601673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:16:23.545245  601673 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420 for IP: 192.168.85.2
	I1202 16:16:23.545271  601673 certs.go:195] generating shared ca certs ...
	I1202 16:16:23.545292  601673 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:23.545461  601673 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:16:23.545517  601673 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:16:23.545531  601673 certs.go:257] generating profile certs ...
	I1202 16:16:23.545597  601673 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/client.key
	I1202 16:16:23.545616  601673 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/client.crt with IP's: []
	I1202 16:16:23.655564  601673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/client.crt ...
	I1202 16:16:23.655596  601673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/client.crt: {Name:mk044aee45864ae3011f87f54cf1c00e1af631cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:23.655842  601673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/client.key ...
	I1202 16:16:23.655863  601673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/client.key: {Name:mk003c03b547eb4bf5924bf96c0fdb38448e2e07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:23.655996  601673 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key.20cb4091
	I1202 16:16:23.656020  601673 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.crt.20cb4091 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1202 16:16:23.717996  601673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.crt.20cb4091 ...
	I1202 16:16:23.718024  601673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.crt.20cb4091: {Name:mk7ca2709c3dc8ec5003cb5d799a6f476acd7cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:23.718186  601673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key.20cb4091 ...
	I1202 16:16:23.718199  601673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key.20cb4091: {Name:mkd07b79f299ae30f7e6e53a93abf6f957f88371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:23.718267  601673 certs.go:382] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.crt.20cb4091 -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.crt
	I1202 16:16:23.718340  601673 certs.go:386] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key.20cb4091 -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key
	I1202 16:16:23.718399  601673 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.key
	I1202 16:16:23.718415  601673 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.crt with IP's: []
	I1202 16:16:23.807661  601673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.crt ...
	I1202 16:16:23.807702  601673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.crt: {Name:mkf7133b667f8a9aff2a1644fce13888a130a6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:23.807929  601673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.key ...
	I1202 16:16:23.807955  601673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.key: {Name:mkaefdc5d62a6bd2ef7a353026035049d26fef63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
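minikube generates these profile certificates in Go rather than shelling out, but the equivalent artifact, a CA-signed serving certificate whose IP SANs match the list logged above, can be illustrated with plain openssl (a rough sketch; file names and the CN are illustrative, not minikube's code):

    # Key and CSR for the apiserver serving cert, then sign with the shared CA.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
    openssl x509 -req -in apiserver.csr -days 365 \
      -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2')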
	I1202 16:16:23.808210  601673 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:16:23.808270  601673 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:16:23.808285  601673 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:16:23.808319  601673 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:16:23.808355  601673 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:16:23.808389  601673 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:16:23.808491  601673 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:16:23.809289  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:16:23.829907  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:16:23.851131  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:16:23.884842  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:16:23.915731  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 16:16:23.945146  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 16:16:23.966850  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:16:23.985129  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 16:16:24.003364  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:16:24.023228  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:16:24.044676  601673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:16:24.067734  601673 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:16:24.081768  601673 ssh_runner.go:195] Run: openssl version
	I1202 16:16:24.088030  601673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:16:24.096397  601673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:16:24.100445  601673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:16:24.100508  601673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:16:24.136028  601673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:16:24.145033  601673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:16:24.154073  601673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:16:24.157992  601673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:16:24.158049  601673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:16:24.194719  601673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:16:24.204057  601673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:16:24.213249  601673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:16:24.217433  601673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:16:24.217490  601673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:16:24.255788  601673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
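The ln/openssl sequence above follows the stock OpenSSL trust-store layout: each CA certificate is linked into /etc/ssl/certs both under a readable name and under its subject hash with a .0 suffix, which is the name OpenSSL-based lookups actually use. Installing one CA this way looks like:

    pem=/usr/share/ca-certificates/minikubeCA.pem

    # Link the CA under a readable name in the system trust directory ...
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem

    # ... and under its OpenSSL subject hash (<hash>.0), the name used by
    # certificate verification.
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"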
	I1202 16:16:24.265089  601673 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:16:24.269380  601673 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 16:16:24.269456  601673 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:16:24.269552  601673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:16:24.269608  601673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:16:24.303729  601673 cri.go:89] found id: ""
	I1202 16:16:24.303809  601673 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:16:24.312374  601673 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 16:16:24.321072  601673 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 16:16:24.321130  601673 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 16:16:24.329519  601673 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 16:16:24.329540  601673 kubeadm.go:158] found existing configuration files:
	
	I1202 16:16:24.329592  601673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1202 16:16:24.338511  601673 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 16:16:24.338685  601673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 16:16:24.346513  601673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1202 16:16:24.354933  601673 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 16:16:24.354991  601673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 16:16:24.362999  601673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1202 16:16:24.372360  601673 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 16:16:24.372495  601673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 16:16:24.380512  601673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1202 16:16:24.388591  601673 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 16:16:24.388641  601673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 16:16:24.396417  601673 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 16:16:24.442304  601673 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 16:16:24.442404  601673 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 16:16:24.465203  601673 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 16:16:24.465291  601673 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 16:16:24.465342  601673 kubeadm.go:319] OS: Linux
	I1202 16:16:24.465405  601673 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 16:16:24.465512  601673 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 16:16:24.465607  601673 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 16:16:24.465695  601673 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 16:16:24.465773  601673 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 16:16:24.465844  601673 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 16:16:24.465938  601673 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 16:16:24.465997  601673 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 16:16:24.528873  601673 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 16:16:24.529065  601673 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 16:16:24.529180  601673 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 16:16:24.536338  601673 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 16:16:23.294551  595674 out.go:252]   - Configuring RBAC rules ...
	I1202 16:16:23.294752  595674 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 16:16:23.302551  595674 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 16:16:23.314269  595674 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 16:16:23.317497  595674 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 16:16:23.320408  595674 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 16:16:23.323536  595674 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 16:16:23.641836  595674 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 16:16:24.060898  595674 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 16:16:24.641811  595674 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 16:16:24.642742  595674 kubeadm.go:319] 
	I1202 16:16:24.642828  595674 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 16:16:24.642840  595674 kubeadm.go:319] 
	I1202 16:16:24.642935  595674 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 16:16:24.642966  595674 kubeadm.go:319] 
	I1202 16:16:24.643007  595674 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 16:16:24.643100  595674 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 16:16:24.643175  595674 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 16:16:24.643185  595674 kubeadm.go:319] 
	I1202 16:16:24.643255  595674 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 16:16:24.643263  595674 kubeadm.go:319] 
	I1202 16:16:24.643317  595674 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 16:16:24.643326  595674 kubeadm.go:319] 
	I1202 16:16:24.643389  595674 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 16:16:24.643518  595674 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 16:16:24.643625  595674 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 16:16:24.643635  595674 kubeadm.go:319] 
	I1202 16:16:24.643764  595674 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 16:16:24.643936  595674 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 16:16:24.643960  595674 kubeadm.go:319] 
	I1202 16:16:24.644087  595674 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 462rtt.jh9pi9ht29u5ggsc \
	I1202 16:16:24.644238  595674 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a700026e2fe1634919809d9050f2aa4b3e0ccbee543d4881e1cd695d56e7eef6 \
	I1202 16:16:24.644274  595674 kubeadm.go:319] 	--control-plane 
	I1202 16:16:24.644283  595674 kubeadm.go:319] 
	I1202 16:16:24.644401  595674 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 16:16:24.644410  595674 kubeadm.go:319] 
	I1202 16:16:24.644536  595674 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 462rtt.jh9pi9ht29u5ggsc \
	I1202 16:16:24.644670  595674 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a700026e2fe1634919809d9050f2aa4b3e0ccbee543d4881e1cd695d56e7eef6 
	I1202 16:16:24.647307  595674 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 16:16:24.647406  595674 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 16:16:24.647434  595674 cni.go:84] Creating CNI manager for ""
	I1202 16:16:24.647445  595674 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:16:24.649327  595674 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 16:16:24.650552  595674 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 16:16:24.655494  595674 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 16:16:24.655515  595674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 16:16:24.670356  595674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 16:16:24.915825  595674 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 16:16:24.915908  595674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:16:24.916029  595674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-046271 minikube.k8s.io/updated_at=2025_12_02T16_16_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689 minikube.k8s.io/name=embed-certs-046271 minikube.k8s.io/primary=true
	I1202 16:16:24.927469  595674 ops.go:34] apiserver oom_adj: -16
	I1202 16:16:25.035152  595674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:16:25.535303  595674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:16:26.035911  595674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:16:24.538062  601673 out.go:252]   - Generating certificates and keys ...
	I1202 16:16:24.538165  601673 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 16:16:24.538261  601673 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 16:16:25.191618  601673 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 16:16:25.398176  601673 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 16:16:25.659651  601673 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 16:16:25.815856  601673 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 16:16:26.009347  601673 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 16:16:26.009530  601673 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-806420 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 16:16:26.176486  601673 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 16:16:26.176657  601673 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-806420 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 16:16:26.267357  601673 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 16:16:26.687891  601673 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 16:16:26.755133  601673 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 16:16:26.755260  601673 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 16:16:26.885657  601673 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 16:16:27.092856  601673 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 16:16:27.447007  601673 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 16:16:27.732194  601673 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 16:16:28.126039  601673 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 16:16:28.126580  601673 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 16:16:28.131454  601673 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 16:16:26.535245  595674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:16:27.036144  595674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:16:27.535699  595674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:16:28.036254  595674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:16:28.535981  595674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:16:29.035280  595674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:16:29.126815  595674 kubeadm.go:1114] duration metric: took 4.210966853s to wait for elevateKubeSystemPrivileges
	I1202 16:16:29.126854  595674 kubeadm.go:403] duration metric: took 17.297134388s to StartCluster
	I1202 16:16:29.126877  595674 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:29.126952  595674 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:16:29.129156  595674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:29.129444  595674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 16:16:29.129470  595674 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:16:29.129555  595674 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 16:16:29.129714  595674 config.go:182] Loaded profile config "embed-certs-046271": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:16:29.129736  595674 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-046271"
	I1202 16:16:29.129766  595674 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-046271"
	I1202 16:16:29.129773  595674 addons.go:70] Setting default-storageclass=true in profile "embed-certs-046271"
	I1202 16:16:29.129792  595674 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-046271"
	I1202 16:16:29.129846  595674 host.go:66] Checking if "embed-certs-046271" exists ...
	I1202 16:16:29.130194  595674 cli_runner.go:164] Run: docker container inspect embed-certs-046271 --format={{.State.Status}}
	I1202 16:16:29.130369  595674 cli_runner.go:164] Run: docker container inspect embed-certs-046271 --format={{.State.Status}}
	I1202 16:16:29.132927  595674 out.go:179] * Verifying Kubernetes components...
	I1202 16:16:29.134365  595674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:16:29.162375  595674 addons.go:239] Setting addon default-storageclass=true in "embed-certs-046271"
	I1202 16:16:29.162603  595674 host.go:66] Checking if "embed-certs-046271" exists ...
	I1202 16:16:29.163122  595674 cli_runner.go:164] Run: docker container inspect embed-certs-046271 --format={{.State.Status}}
	I1202 16:16:29.163611  595674 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:16:28.134598  601673 out.go:252]   - Booting up control plane ...
	I1202 16:16:28.134726  601673 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 16:16:28.134817  601673 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 16:16:28.134911  601673 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 16:16:28.147348  601673 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 16:16:28.147513  601673 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 16:16:28.154724  601673 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 16:16:28.155008  601673 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 16:16:28.155051  601673 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 16:16:28.274277  601673 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 16:16:28.274502  601673 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 16:16:28.776382  601673 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.779936ms
	I1202 16:16:28.780859  601673 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 16:16:28.780978  601673 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1202 16:16:28.781093  601673 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 16:16:28.781194  601673 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
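These control-plane-check targets are ordinary health endpoints and can be probed by hand while kubeadm waits; a sketch using the URLs from the log (the TLS endpoints are self-signed, hence -k, and anonymous access to the health paths is assumed):

    # Kubelet health (plain HTTP on localhost:10248).
    curl -sf http://127.0.0.1:10248/healthz && echo kubelet ok

    # Scheduler and controller-manager serve their probes over self-signed TLS.
    curl -skf https://127.0.0.1:10259/livez   && echo scheduler ok
    curl -skf https://127.0.0.1:10257/healthz && echo controller-manager ok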
	I1202 16:16:29.165582  595674 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:16:29.165646  595674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 16:16:29.165717  595674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-046271
	I1202 16:16:29.210380  595674 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:16:29.210405  595674 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:16:29.211600  595674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-046271
	I1202 16:16:29.213086  595674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:16:29.252865  595674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:16:29.282447  595674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
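The pipeline above edits the live coredns ConfigMap: sed splices a hosts block (plus a log directive) into the Corefile ahead of the forward line, and kubectl replace pushes the result back. A quick way to check the outcome, with the fragment one would expect to see (a sketch, not a capture):

    # Inspect the patched Corefile; the injected hosts block sits before "forward".
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

    # Expected fragment (sketch):
    #     hosts {
    #        192.168.76.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf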
	I1202 16:16:29.347478  595674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:16:29.354600  595674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:16:29.402785  595674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:16:29.631224  595674 node_ready.go:35] waiting up to 6m0s for node "embed-certs-046271" to be "Ready" ...
	I1202 16:16:29.631454  595674 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1202 16:16:29.927113  595674 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> CRI-O <==
	Dec 02 16:16:16 no-preload-534842 crio[775]: time="2025-12-02T16:16:16.802369547Z" level=info msg="Starting container: bd1c7b59c6fcf3d839f33ec77e6df862d3ee856e66d36d1ba3f35ba524acc9ae" id=f342e74c-8ab6-44e6-8580-15f3fcaa3ff7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:16:16 no-preload-534842 crio[775]: time="2025-12-02T16:16:16.804375324Z" level=info msg="Started container" PID=2822 containerID=bd1c7b59c6fcf3d839f33ec77e6df862d3ee856e66d36d1ba3f35ba524acc9ae description=kube-system/coredns-7d764666f9-fxl4s/coredns id=f342e74c-8ab6-44e6-8580-15f3fcaa3ff7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0e2b1a46e8d905558ca0c714864e84869a64386808e6c8e3e342389087d7f22b
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.32882172Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4693e13b-187c-49ec-aa00-b4f955b448b2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.32891149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.334895936Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:458a0de89a7cfe85917944d6178e3dfadd33729222b9457fa88dc747c4fb669b UID:8068757f-9d6b-462a-901f-ba1d7b811746 NetNS:/var/run/netns/6e551dda-32f6-44af-928a-0ab1c1a18e05 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000024b50}] Aliases:map[]}"
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.335076732Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.354876546Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:458a0de89a7cfe85917944d6178e3dfadd33729222b9457fa88dc747c4fb669b UID:8068757f-9d6b-462a-901f-ba1d7b811746 NetNS:/var/run/netns/6e551dda-32f6-44af-928a-0ab1c1a18e05 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000024b50}] Aliases:map[]}"
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.355145046Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.357098514Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.359369871Z" level=info msg="Ran pod sandbox 458a0de89a7cfe85917944d6178e3dfadd33729222b9457fa88dc747c4fb669b with infra container: default/busybox/POD" id=4693e13b-187c-49ec-aa00-b4f955b448b2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.36124971Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=af6690c4-19c9-419f-a435-2a599d874f5b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.361514221Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=af6690c4-19c9-419f-a435-2a599d874f5b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.361647715Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=af6690c4-19c9-419f-a435-2a599d874f5b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.363790323Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5ac1eee5-f669-480d-b521-05470d479102 name=/runtime.v1.ImageService/PullImage
	Dec 02 16:16:20 no-preload-534842 crio[775]: time="2025-12-02T16:16:20.365798383Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 02 16:16:22 no-preload-534842 crio[775]: time="2025-12-02T16:16:22.370392943Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=5ac1eee5-f669-480d-b521-05470d479102 name=/runtime.v1.ImageService/PullImage
	Dec 02 16:16:22 no-preload-534842 crio[775]: time="2025-12-02T16:16:22.371162413Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c3fc2106-2df6-43ed-884c-4e6d55001deb name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:22 no-preload-534842 crio[775]: time="2025-12-02T16:16:22.373207861Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fd52a331-35a0-41f8-bc08-747f12e0ac2b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:22 no-preload-534842 crio[775]: time="2025-12-02T16:16:22.377108871Z" level=info msg="Creating container: default/busybox/busybox" id=5b614f30-85f7-4959-b53e-099b2cdb0d97 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:16:22 no-preload-534842 crio[775]: time="2025-12-02T16:16:22.377248614Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:22 no-preload-534842 crio[775]: time="2025-12-02T16:16:22.382199373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:22 no-preload-534842 crio[775]: time="2025-12-02T16:16:22.382795463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:22 no-preload-534842 crio[775]: time="2025-12-02T16:16:22.408719043Z" level=info msg="Created container 9961781e6b866ee2454ff5bcc721a14e550cd19016ac3e6341f2a17f6f2d3ef9: default/busybox/busybox" id=5b614f30-85f7-4959-b53e-099b2cdb0d97 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:16:22 no-preload-534842 crio[775]: time="2025-12-02T16:16:22.409533003Z" level=info msg="Starting container: 9961781e6b866ee2454ff5bcc721a14e550cd19016ac3e6341f2a17f6f2d3ef9" id=0474eaba-9ae3-4d7d-9608-90601e4066f3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:16:22 no-preload-534842 crio[775]: time="2025-12-02T16:16:22.411689411Z" level=info msg="Started container" PID=2896 containerID=9961781e6b866ee2454ff5bcc721a14e550cd19016ac3e6341f2a17f6f2d3ef9 description=default/busybox/busybox id=0474eaba-9ae3-4d7d-9608-90601e4066f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=458a0de89a7cfe85917944d6178e3dfadd33729222b9457fa88dc747c4fb669b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9961781e6b866       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   458a0de89a7cf       busybox                                     default
	bd1c7b59c6fcf       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      14 seconds ago      Running             coredns                   0                   0e2b1a46e8d90       coredns-7d764666f9-fxl4s                    kube-system
	9a8f4b3fc47d8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   304282e527464       storage-provisioner                         kube-system
	22a74946bc19b       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   f015f5f12113f       kindnet-fn84j                               kube-system
	64c697cc0dd54       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      28 seconds ago      Running             kube-proxy                0                   4af96e2cdca96       kube-proxy-xqnrx                            kube-system
	21df59826aae1       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      38 seconds ago      Running             kube-controller-manager   0                   d909592bbd36e       kube-controller-manager-no-preload-534842   kube-system
	53aec96f06274       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      38 seconds ago      Running             kube-scheduler            0                   bb9973e4a11e9       kube-scheduler-no-preload-534842            kube-system
	0cd4156acca2f       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      38 seconds ago      Running             kube-apiserver            0                   34f73db3b9f16       kube-apiserver-no-preload-534842            kube-system
	746ddfc5633a5       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      38 seconds ago      Running             etcd                      0                   d2fd84b8537d6       etcd-no-preload-534842                      kube-system
	
	
	==> coredns [bd1c7b59c6fcf3d839f33ec77e6df862d3ee856e66d36d1ba3f35ba524acc9ae] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38770 - 61488 "HINFO IN 7917906337112530565.2759499804248900704. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021347443s
	
	
	==> describe nodes <==
	Name:               no-preload-534842
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-534842
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=no-preload-534842
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_15_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:15:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-534842
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:16:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:16:27 +0000   Tue, 02 Dec 2025 16:15:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:16:27 +0000   Tue, 02 Dec 2025 16:15:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:16:27 +0000   Tue, 02 Dec 2025 16:15:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:16:27 +0000   Tue, 02 Dec 2025 16:16:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-534842
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                08e82a9a-8bf2-46c3-bfb2-1095025d0bbb
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-fxl4s                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-no-preload-534842                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-fn84j                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-534842             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-534842    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-xqnrx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-534842             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  31s   node-controller  Node no-preload-534842 event: Registered Node no-preload-534842 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [746ddfc5633a5e9d62197e238b122012dd2836c2cbffd6e02bf0567fcf047345] <==
	{"level":"info","ts":"2025-12-02T16:15:54.356636Z","caller":"traceutil/trace.go:172","msg":"trace[1493840074] transaction","detail":"{read_only:false; response_revision:35; number_of_response:1; }","duration":"122.389066ms","start":"2025-12-02T16:15:54.234239Z","end":"2025-12-02T16:15:54.356628Z","steps":["trace[1493840074] 'process raft request'  (duration: 122.246129ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:15:54.357011Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.045703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T16:15:54.357012Z","caller":"traceutil/trace.go:172","msg":"trace[1759587294] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"197.563618ms","start":"2025-12-02T16:15:54.158877Z","end":"2025-12-02T16:15:54.356440Z","steps":["trace[1759587294] 'process raft request'  (duration: 197.412268ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:15:54.357045Z","caller":"traceutil/trace.go:172","msg":"trace[181806898] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:37; }","duration":"121.089302ms","start":"2025-12-02T16:15:54.235949Z","end":"2025-12-02T16:15:54.357038Z","steps":["trace[181806898] 'agreement among raft nodes before linearized reading'  (duration: 121.022856ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:15:54.356947Z","caller":"traceutil/trace.go:172","msg":"trace[404344395] transaction","detail":"{read_only:false; response_revision:36; number_of_response:1; }","duration":"122.555003ms","start":"2025-12-02T16:15:54.234382Z","end":"2025-12-02T16:15:54.356937Z","steps":["trace[404344395] 'process raft request'  (duration: 122.121282ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:15:54.541297Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.831281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-534842\" limit:1 ","response":"range_response_count:1 size:3506"}
	{"level":"info","ts":"2025-12-02T16:15:54.541370Z","caller":"traceutil/trace.go:172","msg":"trace[509938794] range","detail":"{range_begin:/registry/minions/no-preload-534842; range_end:; response_count:1; response_revision:38; }","duration":"122.91256ms","start":"2025-12-02T16:15:54.418435Z","end":"2025-12-02T16:15:54.541348Z","steps":["trace[509938794] 'agreement among raft nodes before linearized reading'  (duration: 59.170811ms)","trace[509938794] 'range keys from in-memory index tree'  (duration: 63.540355ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:15:54.541390Z","caller":"traceutil/trace.go:172","msg":"trace[1726450694] transaction","detail":"{read_only:false; response_revision:41; number_of_response:1; }","duration":"122.805841ms","start":"2025-12-02T16:15:54.418576Z","end":"2025-12-02T16:15:54.541381Z","steps":["trace[1726450694] 'process raft request'  (duration: 122.718902ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:15:54.541451Z","caller":"traceutil/trace.go:172","msg":"trace[405495616] transaction","detail":"{read_only:false; response_revision:40; number_of_response:1; }","duration":"179.477118ms","start":"2025-12-02T16:15:54.361931Z","end":"2025-12-02T16:15:54.541408Z","steps":["trace[405495616] 'process raft request'  (duration: 179.331348ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:15:54.541584Z","caller":"traceutil/trace.go:172","msg":"trace[1599470208] transaction","detail":"{read_only:false; response_revision:39; number_of_response:1; }","duration":"180.026288ms","start":"2025-12-02T16:15:54.361540Z","end":"2025-12-02T16:15:54.541566Z","steps":["trace[1599470208] 'process raft request'  (duration: 116.118617ms)","trace[1599470208] 'compare'  (duration: 63.484133ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:16:05.333049Z","caller":"traceutil/trace.go:172","msg":"trace[1348432074] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"175.822957ms","start":"2025-12-02T16:16:05.157203Z","end":"2025-12-02T16:16:05.333026Z","steps":["trace[1348432074] 'process raft request'  (duration: 95.295784ms)","trace[1348432074] 'compare'  (duration: 80.33565ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T16:16:05.763812Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"212.982808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.94.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"warn","ts":"2025-12-02T16:16:05.763826Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.82112ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766567477930207 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kindnet-fn84j.187d722cbec69743\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-fn84j.187d722cbec69743\" value_size:626 lease:6571766567477929808 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-02T16:16:05.763888Z","caller":"traceutil/trace.go:172","msg":"trace[1476701408] range","detail":"{range_begin:/registry/masterleases/192.168.94.2; range_end:; response_count:1; response_revision:386; }","duration":"213.089846ms","start":"2025-12-02T16:16:05.550780Z","end":"2025-12-02T16:16:05.763870Z","steps":["trace[1476701408] 'agreement among raft nodes before linearized reading'  (duration: 84.068408ms)","trace[1476701408] 'range keys from in-memory index tree'  (duration: 128.804249ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:16:05.763968Z","caller":"traceutil/trace.go:172","msg":"trace[1952358566] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"257.833096ms","start":"2025-12-02T16:16:05.506110Z","end":"2025-12-02T16:16:05.763943Z","steps":["trace[1952358566] 'process raft request'  (duration: 128.848088ms)","trace[1952358566] 'compare'  (duration: 128.701041ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:16:05.964835Z","caller":"traceutil/trace.go:172","msg":"trace[524732130] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"193.640867ms","start":"2025-12-02T16:16:05.771172Z","end":"2025-12-02T16:16:05.964813Z","steps":["trace[524732130] 'process raft request'  (duration: 193.480056ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:16:06.101006Z","caller":"traceutil/trace.go:172","msg":"trace[542727582] linearizableReadLoop","detail":"{readStateIndex:401; appliedIndex:401; }","duration":"115.292723ms","start":"2025-12-02T16:16:05.985688Z","end":"2025-12-02T16:16:06.100981Z","steps":["trace[542727582] 'read index received'  (duration: 115.283774ms)","trace[542727582] 'applied index is now lower than readState.Index'  (duration: 7.378µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:16:06.101251Z","caller":"traceutil/trace.go:172","msg":"trace[693320639] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"128.946398ms","start":"2025-12-02T16:16:05.972289Z","end":"2025-12-02T16:16:06.101236Z","steps":["trace[693320639] 'process raft request'  (duration: 128.808483ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:16:06.101261Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.553307ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2025-12-02T16:16:06.101490Z","caller":"traceutil/trace.go:172","msg":"trace[872752476] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:389; }","duration":"115.797903ms","start":"2025-12-02T16:16:05.985680Z","end":"2025-12-02T16:16:06.101478Z","steps":["trace[872752476] 'agreement among raft nodes before linearized reading'  (duration: 115.447122ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:16:17.920950Z","caller":"traceutil/trace.go:172","msg":"trace[1907528536] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"116.076513ms","start":"2025-12-02T16:16:17.804851Z","end":"2025-12-02T16:16:17.920928Z","steps":["trace[1907528536] 'process raft request'  (duration: 90.029089ms)","trace[1907528536] 'compare'  (duration: 25.856682ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:16:18.068052Z","caller":"traceutil/trace.go:172","msg":"trace[1442638925] linearizableReadLoop","detail":"{readStateIndex:439; appliedIndex:439; }","duration":"128.361437ms","start":"2025-12-02T16:16:17.939664Z","end":"2025-12-02T16:16:18.068026Z","steps":["trace[1442638925] 'read index received'  (duration: 128.354429ms)","trace[1442638925] 'applied index is now lower than readState.Index'  (duration: 5.993µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T16:16:18.073570Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.88147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T16:16:18.073615Z","caller":"traceutil/trace.go:172","msg":"trace[402324730] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"142.709365ms","start":"2025-12-02T16:16:17.930892Z","end":"2025-12-02T16:16:18.073602Z","steps":["trace[402324730] 'process raft request'  (duration: 137.266779ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:16:18.073632Z","caller":"traceutil/trace.go:172","msg":"trace[1983818423] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:425; }","duration":"133.961456ms","start":"2025-12-02T16:16:17.939659Z","end":"2025-12-02T16:16:18.073621Z","steps":["trace[1983818423] 'agreement among raft nodes before linearized reading'  (duration: 128.456774ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:16:31 up  2:58,  0 user,  load average: 5.50, 4.30, 2.63
	Linux no-preload-534842 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [22a74946bc19b782a99183a4d10b34e362acf7a29eb243bfcdcc769c998cbfea] <==
	I1202 16:16:05.713210       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 16:16:05.713515       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1202 16:16:05.713688       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:16:05.713707       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:16:05.713737       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:16:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:16:06.136929       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:16:06.136969       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:16:06.136988       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:16:06.137246       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 16:16:06.537577       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:16:06.537613       1 metrics.go:72] Registering metrics
	I1202 16:16:06.537692       1 controller.go:711] "Syncing nftables rules"
	I1202 16:16:15.968159       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 16:16:15.968229       1 main.go:301] handling current node
	I1202 16:16:25.967335       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 16:16:25.967391       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0cd4156acca2ff19272075af795bf4eb2dff1961eaa416a48eb6888d7addd485] <==
	I1202 16:15:53.949604       1 shared_informer.go:377] "Caches are synced"
	I1202 16:15:53.949621       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 16:15:53.953750       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1202 16:15:53.953767       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:15:53.972150       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:15:54.154462       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:15:54.854621       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1202 16:15:54.858985       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1202 16:15:54.859063       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 16:15:55.402793       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:15:55.449588       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:15:55.558929       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 16:15:55.566780       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1202 16:15:55.568287       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 16:15:55.572884       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:15:55.883748       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:15:56.741033       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:15:56.752018       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 16:15:56.767266       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 16:16:01.439848       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:16:01.448458       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:16:01.586066       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 16:16:01.884285       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1202 16:16:01.884285       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1202 16:16:29.138139       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:43060: use of closed network connection
	
	
	==> kube-controller-manager [21df59826aae16b3d60de4d4afa98c2a47cac6f1fc3af563fdd4348e73e6bf08] <==
	I1202 16:16:00.696737       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.696718       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.696947       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1202 16:16:00.697029       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-534842"
	I1202 16:16:00.697085       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1202 16:16:00.697377       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.697388       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.697399       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.697412       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.697403       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.697382       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.697471       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.697392       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.697496       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.698399       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.698477       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.698551       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.698587       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.704534       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.704854       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-534842" podCIDRs=["10.244.0.0/24"]
	I1202 16:16:00.794806       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.796985       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:00.797003       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 16:16:00.797010       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 16:16:20.701068       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [64c697cc0dd549fb9ae64996edc93abd533358944c3333e46d48fd5e6b447175] <==
	I1202 16:16:02.365024       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:16:02.442136       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:16:02.542825       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:02.542867       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1202 16:16:02.542995       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:16:02.574256       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:16:02.574395       1 server_linux.go:136] "Using iptables Proxier"
	I1202 16:16:02.582483       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:16:02.582946       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 16:16:02.582968       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:16:02.584654       1 config.go:200] "Starting service config controller"
	I1202 16:16:02.584737       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:16:02.584802       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:16:02.584829       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:16:02.584883       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:16:02.584917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:16:02.585888       1 config.go:309] "Starting node config controller"
	I1202 16:16:02.585915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:16:02.585923       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:16:02.685958       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 16:16:02.685995       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 16:16:02.686014       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [53aec96f062748dcd0f67d5c4dd37a8ccfb7ff98312e0b58477f3ab525ca2e47] <==
	E1202 16:15:54.910586       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 16:15:54.911525       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1202 16:15:54.937863       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 16:15:54.938885       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1202 16:15:54.975370       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1202 16:15:54.976758       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1202 16:15:55.049588       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1202 16:15:55.050640       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1202 16:15:55.074804       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1202 16:15:55.075744       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1202 16:15:55.096810       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 16:15:55.097927       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1202 16:15:55.103965       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 16:15:55.105019       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1202 16:15:55.105980       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1202 16:15:55.106906       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1202 16:15:55.137357       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 16:15:55.138391       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1202 16:15:55.139405       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 16:15:55.140366       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1202 16:15:55.169641       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1202 16:15:55.170746       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1202 16:15:55.364245       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1202 16:15:55.365544       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1202 16:15:58.607742       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 16:16:01 no-preload-534842 kubelet[2226]: I1202 16:16:01.966760    2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d56d7371-0677-4746-972b-b3d24b8070f2-kube-proxy\") pod \"kube-proxy-xqnrx\" (UID: \"d56d7371-0677-4746-972b-b3d24b8070f2\") " pod="kube-system/kube-proxy-xqnrx"
	Dec 02 16:16:01 no-preload-534842 kubelet[2226]: I1202 16:16:01.966784    2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d56d7371-0677-4746-972b-b3d24b8070f2-lib-modules\") pod \"kube-proxy-xqnrx\" (UID: \"d56d7371-0677-4746-972b-b3d24b8070f2\") " pod="kube-system/kube-proxy-xqnrx"
	Dec 02 16:16:01 no-preload-534842 kubelet[2226]: I1202 16:16:01.966973    2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8f80ec9-4aff-4de8-aa5a-e262160e51d7-xtables-lock\") pod \"kindnet-fn84j\" (UID: \"e8f80ec9-4aff-4de8-aa5a-e262160e51d7\") " pod="kube-system/kindnet-fn84j"
	Dec 02 16:16:01 no-preload-534842 kubelet[2226]: I1202 16:16:01.967025    2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d56d7371-0677-4746-972b-b3d24b8070f2-xtables-lock\") pod \"kube-proxy-xqnrx\" (UID: \"d56d7371-0677-4746-972b-b3d24b8070f2\") " pod="kube-system/kube-proxy-xqnrx"
	Dec 02 16:16:01 no-preload-534842 kubelet[2226]: I1202 16:16:01.967057    2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcchj\" (UniqueName: \"kubernetes.io/projected/d56d7371-0677-4746-972b-b3d24b8070f2-kube-api-access-gcchj\") pod \"kube-proxy-xqnrx\" (UID: \"d56d7371-0677-4746-972b-b3d24b8070f2\") " pod="kube-system/kube-proxy-xqnrx"
	Dec 02 16:16:04 no-preload-534842 kubelet[2226]: E1202 16:16:04.651207    2226 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-534842" containerName="kube-apiserver"
	Dec 02 16:16:04 no-preload-534842 kubelet[2226]: I1202 16:16:04.666970    2226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-xqnrx" podStartSLOduration=3.666947497 podStartE2EDuration="3.666947497s" podCreationTimestamp="2025-12-02 16:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:02.692369165 +0000 UTC m=+6.156711539" watchObservedRunningTime="2025-12-02 16:16:04.666947497 +0000 UTC m=+8.131289873"
	Dec 02 16:16:05 no-preload-534842 kubelet[2226]: E1202 16:16:05.057305    2226 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-534842" containerName="etcd"
	Dec 02 16:16:07 no-preload-534842 kubelet[2226]: E1202 16:16:07.657515    2226 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-534842" containerName="kube-scheduler"
	Dec 02 16:16:07 no-preload-534842 kubelet[2226]: I1202 16:16:07.679165    2226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-fn84j" podStartSLOduration=3.635948891 podStartE2EDuration="6.679149174s" podCreationTimestamp="2025-12-02 16:16:01 +0000 UTC" firstStartedPulling="2025-12-02 16:16:02.234056519 +0000 UTC m=+5.698398883" lastFinishedPulling="2025-12-02 16:16:05.277256807 +0000 UTC m=+8.741599166" observedRunningTime="2025-12-02 16:16:05.966662409 +0000 UTC m=+9.431004778" watchObservedRunningTime="2025-12-02 16:16:07.679149174 +0000 UTC m=+11.143491549"
	Dec 02 16:16:07 no-preload-534842 kubelet[2226]: E1202 16:16:07.693056    2226 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-534842" containerName="kube-scheduler"
	Dec 02 16:16:07 no-preload-534842 kubelet[2226]: E1202 16:16:07.986866    2226 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-534842" containerName="kube-controller-manager"
	Dec 02 16:16:14 no-preload-534842 kubelet[2226]: E1202 16:16:14.658998    2226 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-534842" containerName="kube-apiserver"
	Dec 02 16:16:15 no-preload-534842 kubelet[2226]: E1202 16:16:15.058904    2226 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-534842" containerName="etcd"
	Dec 02 16:16:16 no-preload-534842 kubelet[2226]: I1202 16:16:16.403081    2226 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 02 16:16:16 no-preload-534842 kubelet[2226]: I1202 16:16:16.471804    2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/15ec190a-3c61-47f3-87a1-c5ab08d312b1-tmp\") pod \"storage-provisioner\" (UID: \"15ec190a-3c61-47f3-87a1-c5ab08d312b1\") " pod="kube-system/storage-provisioner"
	Dec 02 16:16:16 no-preload-534842 kubelet[2226]: I1202 16:16:16.471854    2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7716bc36-76db-41a6-8acc-0025ea0b7787-config-volume\") pod \"coredns-7d764666f9-fxl4s\" (UID: \"7716bc36-76db-41a6-8acc-0025ea0b7787\") " pod="kube-system/coredns-7d764666f9-fxl4s"
	Dec 02 16:16:16 no-preload-534842 kubelet[2226]: I1202 16:16:16.471884    2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsmbk\" (UniqueName: \"kubernetes.io/projected/7716bc36-76db-41a6-8acc-0025ea0b7787-kube-api-access-dsmbk\") pod \"coredns-7d764666f9-fxl4s\" (UID: \"7716bc36-76db-41a6-8acc-0025ea0b7787\") " pod="kube-system/coredns-7d764666f9-fxl4s"
	Dec 02 16:16:16 no-preload-534842 kubelet[2226]: I1202 16:16:16.471940    2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqm72\" (UniqueName: \"kubernetes.io/projected/15ec190a-3c61-47f3-87a1-c5ab08d312b1-kube-api-access-xqm72\") pod \"storage-provisioner\" (UID: \"15ec190a-3c61-47f3-87a1-c5ab08d312b1\") " pod="kube-system/storage-provisioner"
	Dec 02 16:16:17 no-preload-534842 kubelet[2226]: E1202 16:16:17.717359    2226 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-fxl4s" containerName="coredns"
	Dec 02 16:16:17 no-preload-534842 kubelet[2226]: I1202 16:16:17.922561    2226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.922538701 podStartE2EDuration="15.922538701s" podCreationTimestamp="2025-12-02 16:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:17.922300966 +0000 UTC m=+21.386643380" watchObservedRunningTime="2025-12-02 16:16:17.922538701 +0000 UTC m=+21.386881075"
	Dec 02 16:16:17 no-preload-534842 kubelet[2226]: I1202 16:16:17.922688    2226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-fxl4s" podStartSLOduration=16.922680013 podStartE2EDuration="16.922680013s" podCreationTimestamp="2025-12-02 16:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:17.798448952 +0000 UTC m=+21.262791321" watchObservedRunningTime="2025-12-02 16:16:17.922680013 +0000 UTC m=+21.387022390"
	Dec 02 16:16:18 no-preload-534842 kubelet[2226]: E1202 16:16:18.721223    2226 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-fxl4s" containerName="coredns"
	Dec 02 16:16:19 no-preload-534842 kubelet[2226]: E1202 16:16:19.723807    2226 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-fxl4s" containerName="coredns"
	Dec 02 16:16:20 no-preload-534842 kubelet[2226]: I1202 16:16:20.097804    2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r79f\" (UniqueName: \"kubernetes.io/projected/8068757f-9d6b-462a-901f-ba1d7b811746-kube-api-access-4r79f\") pod \"busybox\" (UID: \"8068757f-9d6b-462a-901f-ba1d7b811746\") " pod="default/busybox"
	
	
	==> storage-provisioner [9a8f4b3fc47d83c9088f785d79906b5cc97a32693ee1b562550a8b8843c3fea3] <==
	I1202 16:16:16.805569       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 16:16:16.815037       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 16:16:16.815098       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 16:16:16.817396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:16.822546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 16:16:16.822801       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 16:16:16.822892       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7abbd92-cf96-41df-a57c-ebfe216540e2", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-534842_8373928e-ce05-4fa6-b6ec-cd89f3190b63 became leader
	I1202 16:16:16.823012       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-534842_8373928e-ce05-4fa6-b6ec-cd89f3190b63!
	W1202 16:16:16.825251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:16.829552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 16:16:16.924101       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-534842_8373928e-ce05-4fa6-b6ec-cd89f3190b63!
	W1202 16:16:18.832357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:18.841381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:20.844769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:20.848958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:22.852909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:22.857747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:24.861418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:24.867728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:26.871542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:26.875407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:28.878499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:28.882735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:30.885853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:30.890028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-534842 -n no-preload-534842
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-534842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-046271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-046271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (267.748407ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:16:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-046271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-046271 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-046271 describe deploy/metrics-server -n kube-system: exit status 1 (67.043281ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-046271 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
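
For reference, the image check at start_stop_delete_test.go:219 can be reproduced directly with kubectl; a minimal sketch (assuming the deployment exists, which it did not in this run) of reading the container image off the metrics-server deployment:

    # print the images used by the metrics-server deployment
    kubectl --context embed-certs-046271 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}'
    # the test expects the output to contain: fake.domain/registry.k8s.io/echoserver:1.4
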
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-046271
helpers_test.go:243: (dbg) docker inspect embed-certs-046271:

-- stdout --
	[
	    {
	        "Id": "c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310",
	        "Created": "2025-12-02T16:16:07.197943832Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 598323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:16:07.236241128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310/hostname",
	        "HostsPath": "/var/lib/docker/containers/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310/hosts",
	        "LogPath": "/var/lib/docker/containers/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310-json.log",
	        "Name": "/embed-certs-046271",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-046271:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-046271",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310",
	                "LowerDir": "/var/lib/docker/overlay2/12561fe812efc0a7100c7e89c65c08692ffbc64b594cbfd37abbc22239f7f12c-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12561fe812efc0a7100c7e89c65c08692ffbc64b594cbfd37abbc22239f7f12c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12561fe812efc0a7100c7e89c65c08692ffbc64b594cbfd37abbc22239f7f12c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12561fe812efc0a7100c7e89c65c08692ffbc64b594cbfd37abbc22239f7f12c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-046271",
	                "Source": "/var/lib/docker/volumes/embed-certs-046271/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-046271",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-046271",
	                "name.minikube.sigs.k8s.io": "embed-certs-046271",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bf4ad500dd3228b1ff038c0222fa679f5066e2bd2540202f6da3d7ce4c5390b0",
	            "SandboxKey": "/var/run/docker/netns/bf4ad500dd32",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33229"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33230"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33234"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33231"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33233"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-046271": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f242ea03e26ef86f9adac97f285055eeae57f7a447eb51a12604c316daba1ca0",
	                    "EndpointID": "c1e9f086c3e9db49c39ffbe2efaf11c2bb05a40a8b72e12cb0f0f55307234d05",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "6e:e8:2d:84:83:7b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-046271",
	                        "c27056350058"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
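
The inspect output above is what the harness queries with Go templates elsewhere in this log (the same --format pattern appears below for port 22/tcp and for the container state). A minimal sketch (hypothetical, not part of the harness) of pulling single fields out of the same data:

    # host port mapped to the API server port 8443/tcp (33233 in the output above)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-046271
    # container run state, as used by the status checks
    docker inspect -f '{{.State.Status}}' embed-certs-046271
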
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-046271 -n embed-certs-046271
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-046271 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-046271 logs -n 25: (1.078117888s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-589300 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo containerd config dump                                                                                                                                                                                                  │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo crio config                                                                                                                                                                                                             │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p bridge-589300                                                                                                                                                                                                                              │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p disable-driver-mounts-904481                                                                                                                                                                                                               │ disable-driver-mounts-904481 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-380588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p old-k8s-version-380588 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-534842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p no-preload-534842 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-380588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-534842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-046271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:16:48
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:16:48.506412  609654 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:16:48.506571  609654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:16:48.506582  609654 out.go:374] Setting ErrFile to fd 2...
	I1202 16:16:48.506589  609654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:16:48.506879  609654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:16:48.507376  609654 out.go:368] Setting JSON to false
	I1202 16:16:48.509008  609654 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10749,"bootTime":1764681459,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:16:48.509078  609654 start.go:143] virtualization: kvm guest
	I1202 16:16:48.510917  609654 out.go:179] * [no-preload-534842] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:16:48.512835  609654 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:16:48.512869  609654 notify.go:221] Checking for updates...
	I1202 16:16:48.516434  609654 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:16:48.517763  609654 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:16:48.519097  609654 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:16:48.520567  609654 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:16:48.521754  609654 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:16:48.523287  609654 config.go:182] Loaded profile config "no-preload-534842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:16:48.523853  609654 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:16:48.550871  609654 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:16:48.551009  609654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:16:48.620912  609654 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:16:48.60796411 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:16:48.621023  609654 docker.go:319] overlay module found
	I1202 16:16:48.622622  609654 out.go:179] * Using the docker driver based on existing profile
	I1202 16:16:48.623884  609654 start.go:309] selected driver: docker
	I1202 16:16:48.623900  609654 start.go:927] validating driver "docker" against &{Name:no-preload-534842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-534842 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:16:48.623997  609654 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:16:48.624592  609654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:16:48.696687  609654 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:16:48.684747804 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:16:48.697107  609654 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:16:48.697176  609654 cni.go:84] Creating CNI manager for ""
	I1202 16:16:48.697257  609654 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:16:48.697319  609654 start.go:353] cluster config:
	{Name:no-preload-534842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-534842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disable
Metrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:16:48.699285  609654 out.go:179] * Starting "no-preload-534842" primary control-plane node in "no-preload-534842" cluster
	I1202 16:16:48.700592  609654 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:16:48.701972  609654 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:16:48.703838  609654 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:16:48.703932  609654 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:16:48.704033  609654 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/no-preload-534842/config.json ...
	I1202 16:16:48.704214  609654 cache.go:107] acquiring lock: {Name:mk6b8eeb5270fa67a5a87f892f37de1ae4805f75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704230  609654 cache.go:107] acquiring lock: {Name:mk821cef64e8468a2739d03d2e1019ac980bf2cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704351  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 16:16:48.704376  609654 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 183.092µs
	I1202 16:16:48.704369  609654 cache.go:107] acquiring lock: {Name:mkce5d795e0ca01a9ee3d674d001cd6e04bbbfba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704397  609654 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 16:16:48.704337  609654 cache.go:107] acquiring lock: {Name:mk17b77bf762047097cbe060b18dc85ae78a9727 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704405  609654 cache.go:107] acquiring lock: {Name:mk91bc91bcc535b3edd8200bf0c06e4d97781487 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704450  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 16:16:48.704461  609654 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 93.524µs
	I1202 16:16:48.704476  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 16:16:48.704378  609654 cache.go:107] acquiring lock: {Name:mk3f4d40fdf359ce0573637a386f14c0a310cdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704479  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 16:16:48.704513  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 16:16:48.704511  609654 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 108.577µs
	I1202 16:16:48.704522  609654 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 16:16:48.704520  609654 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 146.53µs
	I1202 16:16:48.704510  609654 cache.go:107] acquiring lock: {Name:mka2aa325920dfb2720f9036278856e8dac95446 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704337  609654 cache.go:107] acquiring lock: {Name:mkec45cdfdbdafc0ef1296b9d77662a50add1cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704530  609654 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 16:16:48.704478  609654 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 16:16:48.704488  609654 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 160.017µs
	I1202 16:16:48.704572  609654 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 16:16:48.704577  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 16:16:48.704586  609654 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 374.291µs
	I1202 16:16:48.704594  609654 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 16:16:48.704601  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 16:16:48.704613  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 16:16:48.704611  609654 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 105.545µs
	I1202 16:16:48.704626  609654 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 16:16:48.704626  609654 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 297.169µs
	I1202 16:16:48.704637  609654 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 16:16:48.704682  609654 cache.go:87] Successfully saved all images to host disk.
	I1202 16:16:48.732785  609654 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:16:48.732810  609654 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 16:16:48.732832  609654 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:16:48.732876  609654 start.go:360] acquireMachinesLock for no-preload-534842: {Name:mkaeda205abee8b126ec700e1149a8c091541425 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.732949  609654 start.go:364] duration metric: took 53.28µs to acquireMachinesLock for "no-preload-534842"
	I1202 16:16:48.732971  609654 start.go:96] Skipping create...Using existing machine configuration
	I1202 16:16:48.732978  609654 fix.go:54] fixHost starting: 
	I1202 16:16:48.733262  609654 cli_runner.go:164] Run: docker container inspect no-preload-534842 --format={{.State.Status}}
	I1202 16:16:48.757509  609654 fix.go:112] recreateIfNeeded on no-preload-534842: state=Stopped err=<nil>
	W1202 16:16:48.757584  609654 fix.go:138] unexpected machine state, will restart: <nil>
	W1202 16:16:46.398664  601673 node_ready.go:57] node "default-k8s-diff-port-806420" has "Ready":"False" status (will retry)
	W1202 16:16:48.899492  601673 node_ready.go:57] node "default-k8s-diff-port-806420" has "Ready":"False" status (will retry)
	I1202 16:16:47.373122  607516 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:16:47.373134  607516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 16:16:47.373176  607516 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-380588"
	I1202 16:16:47.373193  607516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-380588
	W1202 16:16:47.373193  607516 addons.go:248] addon default-storageclass should already be in state true
	I1202 16:16:47.373228  607516 host.go:66] Checking if "old-k8s-version-380588" exists ...
	I1202 16:16:47.373743  607516 cli_runner.go:164] Run: docker container inspect old-k8s-version-380588 --format={{.State.Status}}
	I1202 16:16:47.374483  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:16:47.374503  607516 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:16:47.374557  607516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-380588
	I1202 16:16:47.408132  607516 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:16:47.408158  607516 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:16:47.408219  607516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-380588
	I1202 16:16:47.408367  607516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33240 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/old-k8s-version-380588/id_rsa Username:docker}
	I1202 16:16:47.409084  607516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33240 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/old-k8s-version-380588/id_rsa Username:docker}
	I1202 16:16:47.434621  607516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33240 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/old-k8s-version-380588/id_rsa Username:docker}
	I1202 16:16:47.505722  607516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:16:47.522936  607516 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-380588" to be "Ready" ...
	I1202 16:16:47.532242  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:16:47.532267  607516 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:16:47.532837  607516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:16:47.549628  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:16:47.549657  607516 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:16:47.555615  607516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:16:47.568223  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:16:47.568253  607516 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:16:47.589160  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:16:47.589192  607516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:16:47.607593  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:16:47.607646  607516 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:16:47.627818  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:16:47.627846  607516 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:16:47.644526  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:16:47.644558  607516 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:16:47.659184  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:16:47.659214  607516 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:16:47.676461  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:16:47.676492  607516 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:16:47.691295  607516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:16:49.367792  607516 node_ready.go:49] node "old-k8s-version-380588" is "Ready"
	I1202 16:16:49.367828  607516 node_ready.go:38] duration metric: took 1.844841164s for node "old-k8s-version-380588" to be "Ready" ...
	I1202 16:16:49.367845  607516 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:16:49.367897  607516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:16:50.084865  607516 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.551991622s)
	I1202 16:16:50.084934  607516 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.529296166s)
	I1202 16:16:50.450491  607516 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.759132616s)
	I1202 16:16:50.450579  607516 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.082648736s)
	I1202 16:16:50.450611  607516 api_server.go:72] duration metric: took 3.109232935s to wait for apiserver process to appear ...
	I1202 16:16:50.450617  607516 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:16:50.450640  607516 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 16:16:50.452526  607516 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-380588 addons enable metrics-server
	
	I1202 16:16:50.454211  607516 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1202 16:16:50.897800  601673 node_ready.go:49] node "default-k8s-diff-port-806420" is "Ready"
	I1202 16:16:50.897837  601673 node_ready.go:38] duration metric: took 11.503399371s for node "default-k8s-diff-port-806420" to be "Ready" ...
	I1202 16:16:50.897855  601673 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:16:50.897973  601673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:16:50.913532  601673 api_server.go:72] duration metric: took 11.808468346s to wait for apiserver process to appear ...
	I1202 16:16:50.913567  601673 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:16:50.913592  601673 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 16:16:50.918265  601673 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 16:16:50.919692  601673 api_server.go:141] control plane version: v1.34.2
	I1202 16:16:50.919720  601673 api_server.go:131] duration metric: took 6.145345ms to wait for apiserver health ...
	I1202 16:16:50.919731  601673 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:16:50.923980  601673 system_pods.go:59] 8 kube-system pods found
	I1202 16:16:50.924035  601673 system_pods.go:61] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:50.924047  601673 system_pods.go:61] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running
	I1202 16:16:50.924055  601673 system_pods.go:61] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running
	I1202 16:16:50.924079  601673 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running
	I1202 16:16:50.924088  601673 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running
	I1202 16:16:50.924097  601673 system_pods.go:61] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running
	I1202 16:16:50.924116  601673 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running
	I1202 16:16:50.924128  601673 system_pods.go:61] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:50.924139  601673 system_pods.go:74] duration metric: took 4.400911ms to wait for pod list to return data ...
	I1202 16:16:50.924152  601673 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:16:50.927078  601673 default_sa.go:45] found service account: "default"
	I1202 16:16:50.927100  601673 default_sa.go:55] duration metric: took 2.939101ms for default service account to be created ...
	I1202 16:16:50.927109  601673 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:16:50.929969  601673 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:50.929996  601673 system_pods.go:89] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:50.930001  601673 system_pods.go:89] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running
	I1202 16:16:50.930007  601673 system_pods.go:89] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running
	I1202 16:16:50.930011  601673 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running
	I1202 16:16:50.930015  601673 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running
	I1202 16:16:50.930019  601673 system_pods.go:89] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running
	I1202 16:16:50.930022  601673 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running
	I1202 16:16:50.930029  601673 system_pods.go:89] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:50.930063  601673 retry.go:31] will retry after 275.463128ms: missing components: kube-dns
	I1202 16:16:51.210416  601673 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:51.210481  601673 system_pods.go:89] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Running
	I1202 16:16:51.210491  601673 system_pods.go:89] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running
	I1202 16:16:51.210501  601673 system_pods.go:89] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running
	I1202 16:16:51.210507  601673 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running
	I1202 16:16:51.210515  601673 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running
	I1202 16:16:51.210521  601673 system_pods.go:89] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running
	I1202 16:16:51.210528  601673 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running
	I1202 16:16:51.210535  601673 system_pods.go:89] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Running
	I1202 16:16:51.210547  601673 system_pods.go:126] duration metric: took 283.431625ms to wait for k8s-apps to be running ...
	I1202 16:16:51.210564  601673 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:16:51.210631  601673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:16:51.224133  601673 system_svc.go:56] duration metric: took 13.558472ms WaitForService to wait for kubelet
	I1202 16:16:51.224167  601673 kubeadm.go:587] duration metric: took 12.119111661s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:16:51.224189  601673 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:16:51.227262  601673 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:16:51.227289  601673 node_conditions.go:123] node cpu capacity is 8
	I1202 16:16:51.227306  601673 node_conditions.go:105] duration metric: took 3.11092ms to run NodePressure ...
	I1202 16:16:51.227321  601673 start.go:242] waiting for startup goroutines ...
	I1202 16:16:51.227332  601673 start.go:247] waiting for cluster config update ...
	I1202 16:16:51.227348  601673 start.go:256] writing updated cluster config ...
	I1202 16:16:51.227668  601673 ssh_runner.go:195] Run: rm -f paused
	I1202 16:16:51.231477  601673 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:16:51.235195  601673 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6h6nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.239606  601673 pod_ready.go:94] pod "coredns-66bc5c9577-6h6nr" is "Ready"
	I1202 16:16:51.239632  601673 pod_ready.go:86] duration metric: took 4.41532ms for pod "coredns-66bc5c9577-6h6nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.241799  601673 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.245668  601673 pod_ready.go:94] pod "etcd-default-k8s-diff-port-806420" is "Ready"
	I1202 16:16:51.245694  601673 pod_ready.go:86] duration metric: took 3.871864ms for pod "etcd-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.247689  601673 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.251364  601673 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-806420" is "Ready"
	I1202 16:16:51.251383  601673 pod_ready.go:86] duration metric: took 3.67643ms for pod "kube-apiserver-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.253305  601673 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.635835  601673 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-806420" is "Ready"
	I1202 16:16:51.635867  601673 pod_ready.go:86] duration metric: took 382.541932ms for pod "kube-controller-manager-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.836538  601673 pod_ready.go:83] waiting for pod "kube-proxy-574km" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:52.236229  601673 pod_ready.go:94] pod "kube-proxy-574km" is "Ready"
	I1202 16:16:52.236255  601673 pod_ready.go:86] duration metric: took 399.693213ms for pod "kube-proxy-574km" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:52.437030  601673 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:52.836314  601673 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-806420" is "Ready"
	I1202 16:16:52.836348  601673 pod_ready.go:86] duration metric: took 399.28942ms for pod "kube-scheduler-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:52.836361  601673 pod_ready.go:40] duration metric: took 1.604860526s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
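The "extra waiting" above checks each labelled control-plane pod for a true Ready condition (or accepts that the pod is gone). A rough client-go sketch of the per-pod check follows, assuming a hypothetical kubeconfig path; minikube's pod_ready helper is more involved and also handles the deleted-pod case and the label selection shown in the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; the test harness uses the profile's own context.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for ctx.Err() == nil {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-6h6nr", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}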
	I1202 16:16:52.884069  601673 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 16:16:52.885811  601673 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-806420" cluster and "default" namespace by default
	I1202 16:16:48.759589  609654 out.go:252] * Restarting existing docker container for "no-preload-534842" ...
	I1202 16:16:48.759682  609654 cli_runner.go:164] Run: docker start no-preload-534842
	I1202 16:16:49.058055  609654 cli_runner.go:164] Run: docker container inspect no-preload-534842 --format={{.State.Status}}
	I1202 16:16:49.079359  609654 kic.go:430] container "no-preload-534842" state is running.
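The state check above is a plain `docker container inspect` with a Go template. A small sketch of the same query through os/exec, assuming the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same inspection the log shows, driven through the docker CLI.
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", "no-preload-534842").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("state:", strings.TrimSpace(string(out)))
}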
	I1202 16:16:49.079821  609654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-534842
	I1202 16:16:49.109879  609654 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/no-preload-534842/config.json ...
	I1202 16:16:49.110163  609654 machine.go:94] provisionDockerMachine start ...
	I1202 16:16:49.110248  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:49.132511  609654 main.go:143] libmachine: Using SSH client type: native
	I1202 16:16:49.132850  609654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33245 <nil> <nil>}
	I1202 16:16:49.132870  609654 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:16:49.133759  609654 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38706->127.0.0.1:33245: read: connection reset by peer
	I1202 16:16:52.275690  609654 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-534842
	
	I1202 16:16:52.275726  609654 ubuntu.go:182] provisioning hostname "no-preload-534842"
	I1202 16:16:52.275785  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:52.295206  609654 main.go:143] libmachine: Using SSH client type: native
	I1202 16:16:52.295445  609654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33245 <nil> <nil>}
	I1202 16:16:52.295467  609654 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-534842 && echo "no-preload-534842" | sudo tee /etc/hostname
	I1202 16:16:52.447590  609654 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-534842
	
	I1202 16:16:52.447685  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:52.465844  609654 main.go:143] libmachine: Using SSH client type: native
	I1202 16:16:52.466126  609654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33245 <nil> <nil>}
	I1202 16:16:52.466145  609654 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-534842' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-534842/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-534842' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:16:52.608395  609654 main.go:143] libmachine: SSH cmd err, output: <nil>: 
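Everything in provisionDockerMachine runs over SSH to the container's published 22/tcp port (127.0.0.1:33245 here), authenticating as user "docker" with the profile's generated id_rsa. Below is a minimal sketch of running one such command with golang.org/x/crypto/ssh; the real libmachine-derived client also retries transient failures like the "connection reset by peer" seen earlier.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable for a throwaway local test container; never do this against real hosts.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33245", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Same hostname command the provisioner issued above.
	out, err := sess.CombinedOutput(`sudo hostname no-preload-534842 && echo "no-preload-534842" | sudo tee /etc/hostname`)
	fmt.Printf("err=%v output=%s\n", err, out)
}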
	I1202 16:16:52.608444  609654 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:16:52.608499  609654 ubuntu.go:190] setting up certificates
	I1202 16:16:52.608512  609654 provision.go:84] configureAuth start
	I1202 16:16:52.608575  609654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-534842
	I1202 16:16:52.628587  609654 provision.go:143] copyHostCerts
	I1202 16:16:52.628647  609654 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:16:52.628679  609654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:16:52.628749  609654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:16:52.628854  609654 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:16:52.628864  609654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:16:52.628892  609654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:16:52.628953  609654 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:16:52.628960  609654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:16:52.628982  609654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:16:52.629033  609654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.no-preload-534842 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-534842]
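configureAuth regenerates the machine's server certificate so its SANs cover every name and address the container answers on (the san=[...] list in the log line above). Here is a self-contained crypto/x509 sketch; it self-signs to stay standalone, whereas minikube signs with the existing ca.pem/ca-key.pem.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-534842"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the log line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-534842"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	// Self-signed for the sketch; the real code passes the CA cert and CA key as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}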
	I1202 16:16:52.736299  609654 provision.go:177] copyRemoteCerts
	I1202 16:16:52.736365  609654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:16:52.736402  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:52.754821  609654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa Username:docker}
	I1202 16:16:52.856864  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:16:52.876063  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 16:16:52.895859  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 16:16:52.917681  609654 provision.go:87] duration metric: took 309.151106ms to configureAuth
	I1202 16:16:52.917713  609654 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:16:52.917948  609654 config.go:182] Loaded profile config "no-preload-534842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:16:52.918065  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:52.939470  609654 main.go:143] libmachine: Using SSH client type: native
	I1202 16:16:52.939783  609654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33245 <nil> <nil>}
	I1202 16:16:52.939817  609654 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:16:53.292084  609654 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:16:53.292119  609654 machine.go:97] duration metric: took 4.181935199s to provisionDockerMachine
	I1202 16:16:53.292134  609654 start.go:293] postStartSetup for "no-preload-534842" (driver="docker")
	I1202 16:16:53.292151  609654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:16:53.292217  609654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:16:53.292268  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:53.314292  609654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa Username:docker}
	I1202 16:16:53.420588  609654 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:16:53.424747  609654 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:16:53.424780  609654 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:16:53.424793  609654 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:16:53.424848  609654 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:16:53.424919  609654 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:16:53.425013  609654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:16:53.434131  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:16:53.456543  609654 start.go:296] duration metric: took 164.391677ms for postStartSetup
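The filesync scan in postStartSetup mirrors everything under .minikube/files into the guest, keeping each asset's path relative to that root (so files/etc/ssl/certs/2680992.pem lands in /etc/ssl/certs). A small sketch of the scan step only; the copy itself then goes over the same SSH runner as above.

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	root := "/home/jenkins/minikube-integration/22021-264555/.minikube/files"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, relErr := filepath.Rel(root, path)
		if relErr != nil {
			return relErr
		}
		// Each local asset keeps its path relative to the files/ root on the guest.
		fmt.Printf("local asset: %s -> /%s\n", path, rel)
		return nil
	})
	if err != nil {
		fmt.Println("scan failed:", err)
	}
}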
	I1202 16:16:53.456652  609654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:16:53.456710  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:53.476607  609654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Dec 02 16:16:41 embed-certs-046271 crio[774]: time="2025-12-02T16:16:41.057078068Z" level=info msg="Starting container: df4c58802ec1dd06c778631c16509e605f40acc82a672e4e4bb2fb4c3ad14509" id=3db4d681-e479-4414-ab63-2751b030a53a name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:16:41 embed-certs-046271 crio[774]: time="2025-12-02T16:16:41.0593176Z" level=info msg="Started container" PID=1836 containerID=df4c58802ec1dd06c778631c16509e605f40acc82a672e4e4bb2fb4c3ad14509 description=kube-system/coredns-66bc5c9577-f2vhx/coredns id=3db4d681-e479-4414-ab63-2751b030a53a name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd9b3ff0261e4d355b1ef48adc82c12149a08d33b2c7b436a9be817ae8a5d462
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.193706578Z" level=info msg="Running pod sandbox: default/busybox/POD" id=35cc1be1-5594-41f0-86d6-2d676da63b23 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.193786521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.199249441Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:381a6bfc85dc90c6568b8bcd3927629b6a5f15a64a6632d14137ed9ddc192987 UID:20ecb04e-b6d3-4f0a-802c-8042502b49f9 NetNS:/var/run/netns/5c4ef9fa-4c55-459f-a7be-581f6a20344c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000490910}] Aliases:map[]}"
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.199289587Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.209393556Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:381a6bfc85dc90c6568b8bcd3927629b6a5f15a64a6632d14137ed9ddc192987 UID:20ecb04e-b6d3-4f0a-802c-8042502b49f9 NetNS:/var/run/netns/5c4ef9fa-4c55-459f-a7be-581f6a20344c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000490910}] Aliases:map[]}"
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.209568816Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.210507238Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.211319718Z" level=info msg="Ran pod sandbox 381a6bfc85dc90c6568b8bcd3927629b6a5f15a64a6632d14137ed9ddc192987 with infra container: default/busybox/POD" id=35cc1be1-5594-41f0-86d6-2d676da63b23 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.212688476Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6624678b-cecd-4fd3-b222-92e788c3f5b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.212833586Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6624678b-cecd-4fd3-b222-92e788c3f5b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.212886731Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6624678b-cecd-4fd3-b222-92e788c3f5b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.213752584Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=847e22db-25ed-444d-be4b-e36bc1a8725f name=/runtime.v1.ImageService/PullImage
	Dec 02 16:16:44 embed-certs-046271 crio[774]: time="2025-12-02T16:16:44.21690461Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 02 16:16:46 embed-certs-046271 crio[774]: time="2025-12-02T16:16:46.312054281Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=847e22db-25ed-444d-be4b-e36bc1a8725f name=/runtime.v1.ImageService/PullImage
	Dec 02 16:16:46 embed-certs-046271 crio[774]: time="2025-12-02T16:16:46.313013822Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f9eeeee3-0635-4b37-9550-30f736764720 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:46 embed-certs-046271 crio[774]: time="2025-12-02T16:16:46.31505908Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8ec54001-8f51-4757-b58f-e56f0cbb7f75 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:46 embed-certs-046271 crio[774]: time="2025-12-02T16:16:46.318771616Z" level=info msg="Creating container: default/busybox/busybox" id=4310e3cb-1cd7-4a88-bbf4-3d56623f011e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:16:46 embed-certs-046271 crio[774]: time="2025-12-02T16:16:46.318909562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:46 embed-certs-046271 crio[774]: time="2025-12-02T16:16:46.322934085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:46 embed-certs-046271 crio[774]: time="2025-12-02T16:16:46.323523218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:46 embed-certs-046271 crio[774]: time="2025-12-02T16:16:46.364225245Z" level=info msg="Created container 1e0574689cb53b917747bec8fa00a953309020c60887ab5ed58f1dd6d47e6ef8: default/busybox/busybox" id=4310e3cb-1cd7-4a88-bbf4-3d56623f011e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:16:46 embed-certs-046271 crio[774]: time="2025-12-02T16:16:46.365004046Z" level=info msg="Starting container: 1e0574689cb53b917747bec8fa00a953309020c60887ab5ed58f1dd6d47e6ef8" id=e217738b-af8f-4a37-8abb-a8cf8a365f04 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:16:46 embed-certs-046271 crio[774]: time="2025-12-02T16:16:46.367091327Z" level=info msg="Started container" PID=1905 containerID=1e0574689cb53b917747bec8fa00a953309020c60887ab5ed58f1dd6d47e6ef8 description=default/busybox/busybox id=e217738b-af8f-4a37-8abb-a8cf8a365f04 name=/runtime.v1.RuntimeService/StartContainer sandboxID=381a6bfc85dc90c6568b8bcd3927629b6a5f15a64a6632d14137ed9ddc192987
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	1e0574689cb53       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   381a6bfc85dc9       busybox                                      default
	df4c58802ec1d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   cd9b3ff0261e4       coredns-66bc5c9577-f2vhx                     kube-system
	7e60ab5c23902       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   f33b1d476e715       storage-provisioner                          kube-system
	d6ef345404b56       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   9a2d21bbc1f10       kindnet-wpj6k                                kube-system
	1be198b8aecac       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      24 seconds ago      Running             kube-proxy                0                   9a05dc860c15e       kube-proxy-q9pxb                             kube-system
	73e5c392e28ac       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      35 seconds ago      Running             kube-controller-manager   0                   1e8a3588e952c       kube-controller-manager-embed-certs-046271   kube-system
	4071de9b75369       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      35 seconds ago      Running             kube-apiserver            0                   a45c6f3690098       kube-apiserver-embed-certs-046271            kube-system
	75b1b72eb0145       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      35 seconds ago      Running             kube-scheduler            0                   f3d4caa7c5d71       kube-scheduler-embed-certs-046271            kube-system
	cc80d1042d93b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   4f43ef713c052       etcd-embed-certs-046271                      kube-system
	
	
	==> coredns [df4c58802ec1dd06c778631c16509e605f40acc82a672e4e4bb2fb4c3ad14509] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59722 - 34704 "HINFO IN 603000636744250854.1910990674931623281. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.073579914s
	
	
	==> describe nodes <==
	Name:               embed-certs-046271
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-046271
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=embed-certs-046271
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_16_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:16:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-046271
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:16:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:16:40 +0000   Tue, 02 Dec 2025 16:16:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:16:40 +0000   Tue, 02 Dec 2025 16:16:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:16:40 +0000   Tue, 02 Dec 2025 16:16:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:16:40 +0000   Tue, 02 Dec 2025 16:16:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-046271
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                e2b6e9a3-1779-45e2-a9a6-d48b0dea91ba
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-f2vhx                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-046271                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-wpj6k                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-046271             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-046271    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-q9pxb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-046271             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node embed-certs-046271 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node embed-certs-046271 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node embed-certs-046271 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node embed-certs-046271 event: Registered Node embed-certs-046271 in Controller
	  Normal  NodeReady                14s   kubelet          Node embed-certs-046271 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [cc80d1042d93b118832b338e36e8f742b37f623e113ca59fd1a6560252d56dda] <==
	{"level":"warn","ts":"2025-12-02T16:16:20.584533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.592212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.604684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.614130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.621933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.628530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.636152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.643983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.652120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.659932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.668891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.676471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.683861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.690925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.698069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.708332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.715492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.723534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.739547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.747731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.755908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.781359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.790152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.797385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:20.851985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54370","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 16:16:54 up  2:59,  0 user,  load average: 3.87, 4.01, 2.58
	Linux embed-certs-046271 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d6ef345404b56007070b7c1306548e83f72757ba82e159087b6a29b2e8f38eaf] <==
	I1202 16:16:29.863130       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 16:16:29.863490       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1202 16:16:29.863747       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:16:29.863804       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:16:29.863830       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:16:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:16:30.159666       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:16:30.159735       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:16:30.159746       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:16:30.159909       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 16:16:30.359997       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:16:30.360023       1 metrics.go:72] Registering metrics
	I1202 16:16:30.360084       1 controller.go:711] "Syncing nftables rules"
	I1202 16:16:40.164374       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 16:16:40.164413       1 main.go:301] handling current node
	I1202 16:16:50.159627       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 16:16:50.159668       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4071de9b75369b9a06bf6e289db4f4e530b05822defcfd28a1d821ed6d532e4e] <==
	I1202 16:16:21.380771       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:16:21.385331       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1202 16:16:21.388754       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:16:21.391634       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:16:21.392015       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 16:16:21.412349       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:16:22.285728       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1202 16:16:22.290384       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1202 16:16:22.290559       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 16:16:22.888189       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:16:22.930986       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:16:22.992551       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 16:16:22.999639       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1202 16:16:23.001065       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 16:16:23.006057       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:16:23.304180       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:16:24.049769       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:16:24.059900       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 16:16:24.068613       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 16:16:28.308447       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 16:16:29.161413       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1202 16:16:29.161413       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1202 16:16:29.556716       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:16:29.577473       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1202 16:16:53.003362       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:50990: use of closed network connection
	
	
	==> kube-controller-manager [73e5c392e28acde0a33f3cf99e1da832f7f4fb4a7b1662d6b889b903e0a7634d] <==
	I1202 16:16:28.302101       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 16:16:28.302137       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 16:16:28.302153       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 16:16:28.302167       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 16:16:28.302178       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 16:16:28.302249       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 16:16:28.302398       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-046271"
	I1202 16:16:28.302484       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1202 16:16:28.302489       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1202 16:16:28.302689       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 16:16:28.302808       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 16:16:28.303564       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 16:16:28.303600       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 16:16:28.303625       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 16:16:28.303803       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1202 16:16:28.303870       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 16:16:28.304170       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 16:16:28.304618       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1202 16:16:28.308952       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 16:16:28.310015       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 16:16:28.315106       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 16:16:28.319463       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 16:16:28.327703       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 16:16:28.332877       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 16:16:43.305794       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1be198b8aecacf3b2d16cd5f53b3382f74f36aceabd280d93dc55000b4f0a2ad] <==
	I1202 16:16:29.721177       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:16:29.804966       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 16:16:29.906540       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 16:16:29.906589       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1202 16:16:29.906678       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:16:29.941092       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:16:29.941198       1 server_linux.go:132] "Using iptables Proxier"
	I1202 16:16:29.946400       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:16:29.946878       1 server.go:527] "Version info" version="v1.34.2"
	I1202 16:16:29.947244       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:16:29.950155       1 config.go:200] "Starting service config controller"
	I1202 16:16:29.951361       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:16:29.951457       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:16:29.951488       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:16:29.951523       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:16:29.951557       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:16:29.951808       1 config.go:309] "Starting node config controller"
	I1202 16:16:29.951845       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:16:29.951869       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:16:30.052306       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 16:16:30.052348       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 16:16:30.052361       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [75b1b72eb0145b93e67c985c2b67ff0efafd113f38249b07f9dcecd0604e5b1c] <==
	E1202 16:16:21.337988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 16:16:21.338493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 16:16:21.338736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 16:16:21.338818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 16:16:21.338894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 16:16:21.338928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 16:16:21.338970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 16:16:21.338982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 16:16:21.338986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 16:16:21.339034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 16:16:21.339261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 16:16:21.339291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 16:16:22.174634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 16:16:22.184049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 16:16:22.205634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 16:16:22.272804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 16:16:22.356959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 16:16:22.378566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 16:16:22.427643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 16:16:22.476573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 16:16:22.505828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 16:16:22.569559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 16:16:22.651708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 16:16:22.797055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1202 16:16:25.932969       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 16:16:25 embed-certs-046271 kubelet[1319]: I1202 16:16:25.052961    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-046271" podStartSLOduration=2.052934139 podStartE2EDuration="2.052934139s" podCreationTimestamp="2025-12-02 16:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:25.031044359 +0000 UTC m=+1.226017226" watchObservedRunningTime="2025-12-02 16:16:25.052934139 +0000 UTC m=+1.247907004"
	Dec 02 16:16:25 embed-certs-046271 kubelet[1319]: I1202 16:16:25.065949    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-046271" podStartSLOduration=2.065924295 podStartE2EDuration="2.065924295s" podCreationTimestamp="2025-12-02 16:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:25.053806846 +0000 UTC m=+1.248779711" watchObservedRunningTime="2025-12-02 16:16:25.065924295 +0000 UTC m=+1.260897164"
	Dec 02 16:16:25 embed-certs-046271 kubelet[1319]: I1202 16:16:25.066080    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-046271" podStartSLOduration=1.066071665 podStartE2EDuration="1.066071665s" podCreationTimestamp="2025-12-02 16:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:25.066039217 +0000 UTC m=+1.261012084" watchObservedRunningTime="2025-12-02 16:16:25.066071665 +0000 UTC m=+1.261044531"
	Dec 02 16:16:25 embed-certs-046271 kubelet[1319]: I1202 16:16:25.089670    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-046271" podStartSLOduration=1.089644025 podStartE2EDuration="1.089644025s" podCreationTimestamp="2025-12-02 16:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:25.07817083 +0000 UTC m=+1.273143696" watchObservedRunningTime="2025-12-02 16:16:25.089644025 +0000 UTC m=+1.284616889"
	Dec 02 16:16:28 embed-certs-046271 kubelet[1319]: I1202 16:16:28.360797    1319 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 02 16:16:28 embed-certs-046271 kubelet[1319]: I1202 16:16:28.361556    1319 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 02 16:16:29 embed-certs-046271 kubelet[1319]: I1202 16:16:29.241707    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/85574988-c836-4351-80bf-92683e782d91-kube-proxy\") pod \"kube-proxy-q9pxb\" (UID: \"85574988-c836-4351-80bf-92683e782d91\") " pod="kube-system/kube-proxy-q9pxb"
	Dec 02 16:16:29 embed-certs-046271 kubelet[1319]: I1202 16:16:29.241788    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9249e8d2-e10c-4cae-bf04-cbf331109cf5-xtables-lock\") pod \"kindnet-wpj6k\" (UID: \"9249e8d2-e10c-4cae-bf04-cbf331109cf5\") " pod="kube-system/kindnet-wpj6k"
	Dec 02 16:16:29 embed-certs-046271 kubelet[1319]: I1202 16:16:29.241811    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9249e8d2-e10c-4cae-bf04-cbf331109cf5-cni-cfg\") pod \"kindnet-wpj6k\" (UID: \"9249e8d2-e10c-4cae-bf04-cbf331109cf5\") " pod="kube-system/kindnet-wpj6k"
	Dec 02 16:16:29 embed-certs-046271 kubelet[1319]: I1202 16:16:29.241836    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9249e8d2-e10c-4cae-bf04-cbf331109cf5-lib-modules\") pod \"kindnet-wpj6k\" (UID: \"9249e8d2-e10c-4cae-bf04-cbf331109cf5\") " pod="kube-system/kindnet-wpj6k"
	Dec 02 16:16:29 embed-certs-046271 kubelet[1319]: I1202 16:16:29.241858    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvth\" (UniqueName: \"kubernetes.io/projected/9249e8d2-e10c-4cae-bf04-cbf331109cf5-kube-api-access-9mvth\") pod \"kindnet-wpj6k\" (UID: \"9249e8d2-e10c-4cae-bf04-cbf331109cf5\") " pod="kube-system/kindnet-wpj6k"
	Dec 02 16:16:29 embed-certs-046271 kubelet[1319]: I1202 16:16:29.241883    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85574988-c836-4351-80bf-92683e782d91-xtables-lock\") pod \"kube-proxy-q9pxb\" (UID: \"85574988-c836-4351-80bf-92683e782d91\") " pod="kube-system/kube-proxy-q9pxb"
	Dec 02 16:16:29 embed-certs-046271 kubelet[1319]: I1202 16:16:29.241901    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85574988-c836-4351-80bf-92683e782d91-lib-modules\") pod \"kube-proxy-q9pxb\" (UID: \"85574988-c836-4351-80bf-92683e782d91\") " pod="kube-system/kube-proxy-q9pxb"
	Dec 02 16:16:29 embed-certs-046271 kubelet[1319]: I1202 16:16:29.241925    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98pqr\" (UniqueName: \"kubernetes.io/projected/85574988-c836-4351-80bf-92683e782d91-kube-api-access-98pqr\") pod \"kube-proxy-q9pxb\" (UID: \"85574988-c836-4351-80bf-92683e782d91\") " pod="kube-system/kube-proxy-q9pxb"
	Dec 02 16:16:29 embed-certs-046271 kubelet[1319]: I1202 16:16:29.990130    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wpj6k" podStartSLOduration=0.990106254 podStartE2EDuration="990.106254ms" podCreationTimestamp="2025-12-02 16:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:29.977460106 +0000 UTC m=+6.172432971" watchObservedRunningTime="2025-12-02 16:16:29.990106254 +0000 UTC m=+6.185079117"
	Dec 02 16:16:30 embed-certs-046271 kubelet[1319]: I1202 16:16:30.006267    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q9pxb" podStartSLOduration=1.006242482 podStartE2EDuration="1.006242482s" podCreationTimestamp="2025-12-02 16:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:29.990491206 +0000 UTC m=+6.185464072" watchObservedRunningTime="2025-12-02 16:16:30.006242482 +0000 UTC m=+6.201215349"
	Dec 02 16:16:40 embed-certs-046271 kubelet[1319]: I1202 16:16:40.659170    1319 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 02 16:16:40 embed-certs-046271 kubelet[1319]: I1202 16:16:40.725498    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5mk4\" (UniqueName: \"kubernetes.io/projected/5a625bd8-b8b8-4abc-b86a-d39218c7ffe3-kube-api-access-n5mk4\") pod \"storage-provisioner\" (UID: \"5a625bd8-b8b8-4abc-b86a-d39218c7ffe3\") " pod="kube-system/storage-provisioner"
	Dec 02 16:16:40 embed-certs-046271 kubelet[1319]: I1202 16:16:40.725595    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5a625bd8-b8b8-4abc-b86a-d39218c7ffe3-tmp\") pod \"storage-provisioner\" (UID: \"5a625bd8-b8b8-4abc-b86a-d39218c7ffe3\") " pod="kube-system/storage-provisioner"
	Dec 02 16:16:40 embed-certs-046271 kubelet[1319]: I1202 16:16:40.725624    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/364e193c-f53a-4a43-b365-fe8364c3bd0f-config-volume\") pod \"coredns-66bc5c9577-f2vhx\" (UID: \"364e193c-f53a-4a43-b365-fe8364c3bd0f\") " pod="kube-system/coredns-66bc5c9577-f2vhx"
	Dec 02 16:16:40 embed-certs-046271 kubelet[1319]: I1202 16:16:40.725648    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n222m\" (UniqueName: \"kubernetes.io/projected/364e193c-f53a-4a43-b365-fe8364c3bd0f-kube-api-access-n222m\") pod \"coredns-66bc5c9577-f2vhx\" (UID: \"364e193c-f53a-4a43-b365-fe8364c3bd0f\") " pod="kube-system/coredns-66bc5c9577-f2vhx"
	Dec 02 16:16:42 embed-certs-046271 kubelet[1319]: I1202 16:16:42.015891    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-f2vhx" podStartSLOduration=13.015868158 podStartE2EDuration="13.015868158s" podCreationTimestamp="2025-12-02 16:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:42.003590425 +0000 UTC m=+18.198563303" watchObservedRunningTime="2025-12-02 16:16:42.015868158 +0000 UTC m=+18.210841025"
	Dec 02 16:16:43 embed-certs-046271 kubelet[1319]: I1202 16:16:43.887051    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.887026212 podStartE2EDuration="14.887026212s" podCreationTimestamp="2025-12-02 16:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:42.025721293 +0000 UTC m=+18.220694161" watchObservedRunningTime="2025-12-02 16:16:43.887026212 +0000 UTC m=+20.081999078"
	Dec 02 16:16:43 embed-certs-046271 kubelet[1319]: I1202 16:16:43.947290    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqjfh\" (UniqueName: \"kubernetes.io/projected/20ecb04e-b6d3-4f0a-802c-8042502b49f9-kube-api-access-wqjfh\") pod \"busybox\" (UID: \"20ecb04e-b6d3-4f0a-802c-8042502b49f9\") " pod="default/busybox"
	Dec 02 16:16:47 embed-certs-046271 kubelet[1319]: I1202 16:16:47.019280    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.918193585 podStartE2EDuration="4.019261174s" podCreationTimestamp="2025-12-02 16:16:43 +0000 UTC" firstStartedPulling="2025-12-02 16:16:44.213239048 +0000 UTC m=+20.408211905" lastFinishedPulling="2025-12-02 16:16:46.314306632 +0000 UTC m=+22.509279494" observedRunningTime="2025-12-02 16:16:47.019031968 +0000 UTC m=+23.214004834" watchObservedRunningTime="2025-12-02 16:16:47.019261174 +0000 UTC m=+23.214234039"
	
	
	==> storage-provisioner [7e60ab5c2390261b43d2a248b00995e02bd4c7ee596fd35277dc7ff9e57cc367] <==
	I1202 16:16:41.042294       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 16:16:41.049235       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 16:16:41.049302       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 16:16:41.051707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:41.057312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 16:16:41.057548       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 16:16:41.057653       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11ce2c3e-17d1-4723-87f4-2086c94d5f48", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-046271_02c69cfb-1614-451c-98e6-d1080d3c892f became leader
	I1202 16:16:41.058014       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-046271_02c69cfb-1614-451c-98e6-d1080d3c892f!
	W1202 16:16:41.061047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:41.064840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 16:16:41.158510       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-046271_02c69cfb-1614-451c-98e6-d1080d3c892f!
	W1202 16:16:43.067849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:43.072134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:45.076114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:45.081577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:47.086128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:47.091478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:49.096758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:49.111950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:51.115491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:51.119585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:53.123638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:53.129341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-046271 -n embed-certs-046271
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-046271 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.26s)
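Note on the repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above: the provisioner's leader election still takes its lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, visible in the leaderelection lines), so client-go emits a deprecation warning on every access; these are warnings, not errors. A minimal sketch of inspecting that lock object by hand, assuming the embed-certs-046271 context is still reachable; newer controllers typically take the same kind of lock on a coordination.k8s.io/v1 Lease instead:

# hypothetical manual inspection, not part of the test run
kubectl --context embed-certs-046271 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
# leases used by other control-plane components live in the same namespace
kubectl --context embed-certs-046271 -n kube-system get leases
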

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (293.86782ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-806420 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-806420 describe deploy/metrics-server -n kube-system: exit status 1 (89.796115ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-806420 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
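The NotFound error above follows directly from the exit status 11 earlier in this test: "addons enable" aborts in its paused-state check (MK_ADDON_ENABLE_PAUSED, the "sudo runc list -f json" call failing with "open /run/runc: no such file or directory" under the crio runtime), so the metrics-server deployment is never created and there is no image to compare against. A minimal sketch of reproducing that check by hand, assuming the default-k8s-diff-port-806420 node container is still running; the docker exec form below is only an illustration, not what the test itself executes:

# hypothetical manual reproduction of the paused-state check shown in the stderr above
docker exec default-k8s-diff-port-806420 sudo runc list -f json
# expected to fail the same way: open /run/runc: no such file or directory
# the crio-managed containers themselves can still be listed through crictl
docker exec default-k8s-diff-port-806420 sudo crictl ps
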
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-806420
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-806420:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b",
	        "Created": "2025-12-02T16:16:19.182047028Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 602512,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:16:19.231798929Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b/hostname",
	        "HostsPath": "/var/lib/docker/containers/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b/hosts",
	        "LogPath": "/var/lib/docker/containers/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b-json.log",
	        "Name": "/default-k8s-diff-port-806420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-806420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-806420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b",
	                "LowerDir": "/var/lib/docker/overlay2/798271a5d7d9be2b9f56abfd6664a7da5e191a27e4202bc76317776877650382-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/798271a5d7d9be2b9f56abfd6664a7da5e191a27e4202bc76317776877650382/merged",
	                "UpperDir": "/var/lib/docker/overlay2/798271a5d7d9be2b9f56abfd6664a7da5e191a27e4202bc76317776877650382/diff",
	                "WorkDir": "/var/lib/docker/overlay2/798271a5d7d9be2b9f56abfd6664a7da5e191a27e4202bc76317776877650382/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-806420",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-806420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-806420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-806420",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-806420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8edbe82f80655a61c5ebb9ca04ae356f7987d3ca85b6d0ad970c55a595210f9d",
	            "SandboxKey": "/var/run/docker/netns/8edbe82f8065",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33235"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33236"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33239"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33237"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33238"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-806420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71c0f0496cc56b89da0cbf1f1c56db8adab9c786627f80a5f88bceb2579ed18f",
	                    "EndpointID": "40ffdc80bc25a1f3f582d4a1618ac93678d1b9750f8849accc374308fae38373",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "de:35:d4:16:d2:27",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-806420",
	                        "11de8b8d4711"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
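One detail worth reading out of the inspect output above: this profile's distinguishing setting is the non-default API server port, exposed as 8444/tcp inside the container and published to 127.0.0.1:33238 on the host. A sketch of querying that mapping directly, assuming the container is still up:

# hypothetical manual check of the published API server port
docker port default-k8s-diff-port-806420 8444/tcp
# prints something like: 127.0.0.1:33238
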
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-806420 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-806420 logs -n 25: (1.420067711s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-589300 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ ssh     │ -p bridge-589300 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo containerd config dump                                                                                                                                                                                                  │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo crio config                                                                                                                                                                                                             │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p bridge-589300                                                                                                                                                                                                                              │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p disable-driver-mounts-904481                                                                                                                                                                                                               │ disable-driver-mounts-904481 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-380588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p old-k8s-version-380588 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-534842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p no-preload-534842 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-380588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-534842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-046271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p embed-certs-046271 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:16:48
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:16:48.506412  609654 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:16:48.506571  609654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:16:48.506582  609654 out.go:374] Setting ErrFile to fd 2...
	I1202 16:16:48.506589  609654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:16:48.506879  609654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:16:48.507376  609654 out.go:368] Setting JSON to false
	I1202 16:16:48.509008  609654 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10749,"bootTime":1764681459,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:16:48.509078  609654 start.go:143] virtualization: kvm guest
	I1202 16:16:48.510917  609654 out.go:179] * [no-preload-534842] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:16:48.512835  609654 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:16:48.512869  609654 notify.go:221] Checking for updates...
	I1202 16:16:48.516434  609654 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:16:48.517763  609654 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:16:48.519097  609654 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:16:48.520567  609654 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:16:48.521754  609654 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:16:48.523287  609654 config.go:182] Loaded profile config "no-preload-534842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:16:48.523853  609654 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:16:48.550871  609654 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:16:48.551009  609654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:16:48.620912  609654 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:16:48.60796411 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:16:48.621023  609654 docker.go:319] overlay module found
	I1202 16:16:48.622622  609654 out.go:179] * Using the docker driver based on existing profile
	I1202 16:16:48.623884  609654 start.go:309] selected driver: docker
	I1202 16:16:48.623900  609654 start.go:927] validating driver "docker" against &{Name:no-preload-534842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-534842 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:16:48.623997  609654 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:16:48.624592  609654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:16:48.696687  609654 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:16:48.684747804 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:16:48.697107  609654 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:16:48.697176  609654 cni.go:84] Creating CNI manager for ""
	I1202 16:16:48.697257  609654 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:16:48.697319  609654 start.go:353] cluster config:
	{Name:no-preload-534842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-534842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disable
Metrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:16:48.699285  609654 out.go:179] * Starting "no-preload-534842" primary control-plane node in "no-preload-534842" cluster
	I1202 16:16:48.700592  609654 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:16:48.701972  609654 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:16:48.703838  609654 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:16:48.703932  609654 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:16:48.704033  609654 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/no-preload-534842/config.json ...
	I1202 16:16:48.704214  609654 cache.go:107] acquiring lock: {Name:mk6b8eeb5270fa67a5a87f892f37de1ae4805f75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704230  609654 cache.go:107] acquiring lock: {Name:mk821cef64e8468a2739d03d2e1019ac980bf2cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704351  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 16:16:48.704376  609654 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 183.092µs
	I1202 16:16:48.704369  609654 cache.go:107] acquiring lock: {Name:mkce5d795e0ca01a9ee3d674d001cd6e04bbbfba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704397  609654 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 16:16:48.704337  609654 cache.go:107] acquiring lock: {Name:mk17b77bf762047097cbe060b18dc85ae78a9727 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704405  609654 cache.go:107] acquiring lock: {Name:mk91bc91bcc535b3edd8200bf0c06e4d97781487 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704450  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 16:16:48.704461  609654 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 93.524µs
	I1202 16:16:48.704476  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 16:16:48.704378  609654 cache.go:107] acquiring lock: {Name:mk3f4d40fdf359ce0573637a386f14c0a310cdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704479  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 16:16:48.704513  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 16:16:48.704511  609654 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 108.577µs
	I1202 16:16:48.704522  609654 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 16:16:48.704520  609654 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 146.53µs
	I1202 16:16:48.704510  609654 cache.go:107] acquiring lock: {Name:mka2aa325920dfb2720f9036278856e8dac95446 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704337  609654 cache.go:107] acquiring lock: {Name:mkec45cdfdbdafc0ef1296b9d77662a50add1cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.704530  609654 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 16:16:48.704478  609654 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 16:16:48.704488  609654 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 160.017µs
	I1202 16:16:48.704572  609654 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 16:16:48.704577  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 16:16:48.704586  609654 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 374.291µs
	I1202 16:16:48.704594  609654 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 16:16:48.704601  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 16:16:48.704613  609654 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 16:16:48.704611  609654 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 105.545µs
	I1202 16:16:48.704626  609654 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 16:16:48.704626  609654 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 297.169µs
	I1202 16:16:48.704637  609654 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 16:16:48.704682  609654 cache.go:87] Successfully saved all images to host disk.
	I1202 16:16:48.732785  609654 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:16:48.732810  609654 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 16:16:48.732832  609654 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:16:48.732876  609654 start.go:360] acquireMachinesLock for no-preload-534842: {Name:mkaeda205abee8b126ec700e1149a8c091541425 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:16:48.732949  609654 start.go:364] duration metric: took 53.28µs to acquireMachinesLock for "no-preload-534842"
	I1202 16:16:48.732971  609654 start.go:96] Skipping create...Using existing machine configuration
	I1202 16:16:48.732978  609654 fix.go:54] fixHost starting: 
	I1202 16:16:48.733262  609654 cli_runner.go:164] Run: docker container inspect no-preload-534842 --format={{.State.Status}}
	I1202 16:16:48.757509  609654 fix.go:112] recreateIfNeeded on no-preload-534842: state=Stopped err=<nil>
	W1202 16:16:48.757584  609654 fix.go:138] unexpected machine state, will restart: <nil>
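The fixHost step above reuses the existing machine: it reads the container's state with `docker container inspect --format={{.State.Status}}` and, because the state is Stopped, falls through to a restart rather than a re-create. A minimal Go sketch of that check (the profile name is taken from the log, the Docker CLI is assumed to be on PATH, and this is an illustration, not minikube's cli_runner code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState mirrors the `docker container inspect --format={{.State.Status}}`
    // call logged above and returns Docker's state string ("running", "exited", ...).
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	state, err := containerState("no-preload-534842") // profile name from the log
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	if state != "running" {
    		// the log handles this case with a plain `docker start no-preload-534842`
    		fmt.Printf("container is %q, restart needed\n", state)
    	}
    }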
	W1202 16:16:46.398664  601673 node_ready.go:57] node "default-k8s-diff-port-806420" has "Ready":"False" status (will retry)
	W1202 16:16:48.899492  601673 node_ready.go:57] node "default-k8s-diff-port-806420" has "Ready":"False" status (will retry)
	I1202 16:16:47.373122  607516 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:16:47.373134  607516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 16:16:47.373176  607516 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-380588"
	I1202 16:16:47.373193  607516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-380588
	W1202 16:16:47.373193  607516 addons.go:248] addon default-storageclass should already be in state true
	I1202 16:16:47.373228  607516 host.go:66] Checking if "old-k8s-version-380588" exists ...
	I1202 16:16:47.373743  607516 cli_runner.go:164] Run: docker container inspect old-k8s-version-380588 --format={{.State.Status}}
	I1202 16:16:47.374483  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:16:47.374503  607516 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:16:47.374557  607516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-380588
	I1202 16:16:47.408132  607516 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:16:47.408158  607516 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:16:47.408219  607516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-380588
	I1202 16:16:47.408367  607516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33240 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/old-k8s-version-380588/id_rsa Username:docker}
	I1202 16:16:47.409084  607516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33240 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/old-k8s-version-380588/id_rsa Username:docker}
	I1202 16:16:47.434621  607516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33240 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/old-k8s-version-380588/id_rsa Username:docker}
	I1202 16:16:47.505722  607516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:16:47.522936  607516 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-380588" to be "Ready" ...
	I1202 16:16:47.532242  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:16:47.532267  607516 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:16:47.532837  607516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:16:47.549628  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:16:47.549657  607516 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:16:47.555615  607516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:16:47.568223  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:16:47.568253  607516 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:16:47.589160  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:16:47.589192  607516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:16:47.607593  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:16:47.607646  607516 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:16:47.627818  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:16:47.627846  607516 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:16:47.644526  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:16:47.644558  607516 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:16:47.659184  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:16:47.659214  607516 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:16:47.676461  607516 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:16:47.676492  607516 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:16:47.691295  607516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:16:49.367792  607516 node_ready.go:49] node "old-k8s-version-380588" is "Ready"
	I1202 16:16:49.367828  607516 node_ready.go:38] duration metric: took 1.844841164s for node "old-k8s-version-380588" to be "Ready" ...
	I1202 16:16:49.367845  607516 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:16:49.367897  607516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:16:50.084865  607516 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.551991622s)
	I1202 16:16:50.084934  607516 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.529296166s)
	I1202 16:16:50.450491  607516 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.759132616s)
	I1202 16:16:50.450579  607516 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.082648736s)
	I1202 16:16:50.450611  607516 api_server.go:72] duration metric: took 3.109232935s to wait for apiserver process to appear ...
	I1202 16:16:50.450617  607516 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:16:50.450640  607516 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 16:16:50.452526  607516 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-380588 addons enable metrics-server
	
	I1202 16:16:50.454211  607516 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1202 16:16:50.897800  601673 node_ready.go:49] node "default-k8s-diff-port-806420" is "Ready"
	I1202 16:16:50.897837  601673 node_ready.go:38] duration metric: took 11.503399371s for node "default-k8s-diff-port-806420" to be "Ready" ...
	I1202 16:16:50.897855  601673 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:16:50.897973  601673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:16:50.913532  601673 api_server.go:72] duration metric: took 11.808468346s to wait for apiserver process to appear ...
	I1202 16:16:50.913567  601673 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:16:50.913592  601673 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 16:16:50.918265  601673 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 16:16:50.919692  601673 api_server.go:141] control plane version: v1.34.2
	I1202 16:16:50.919720  601673 api_server.go:131] duration metric: took 6.145345ms to wait for apiserver health ...
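The healthz wait above is essentially an HTTPS GET against the apiserver that succeeds once it returns 200 with the body `ok`. A small sketch of such a probe, using the endpoint from this log; skipping TLS verification is a shortcut for illustration (minikube's own client trusts minikubeCA instead), and the retry count is arbitrary:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// apiserver endpoint taken from the log above
    	url := "https://192.168.85.2:8444/healthz"

    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// quick probe only: a real client would load the cluster CA instead
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}

    	for i := 0; i < 30; i++ {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			// a 500 with "healthz check failed" (as in the old-k8s-version log
    			// further down) just means some poststarthook has not finished yet
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("apiserver never became healthy")
    }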
	I1202 16:16:50.919731  601673 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:16:50.923980  601673 system_pods.go:59] 8 kube-system pods found
	I1202 16:16:50.924035  601673 system_pods.go:61] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:50.924047  601673 system_pods.go:61] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running
	I1202 16:16:50.924055  601673 system_pods.go:61] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running
	I1202 16:16:50.924079  601673 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running
	I1202 16:16:50.924088  601673 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running
	I1202 16:16:50.924097  601673 system_pods.go:61] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running
	I1202 16:16:50.924116  601673 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running
	I1202 16:16:50.924128  601673 system_pods.go:61] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:50.924139  601673 system_pods.go:74] duration metric: took 4.400911ms to wait for pod list to return data ...
	I1202 16:16:50.924152  601673 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:16:50.927078  601673 default_sa.go:45] found service account: "default"
	I1202 16:16:50.927100  601673 default_sa.go:55] duration metric: took 2.939101ms for default service account to be created ...
	I1202 16:16:50.927109  601673 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:16:50.929969  601673 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:50.929996  601673 system_pods.go:89] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:50.930001  601673 system_pods.go:89] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running
	I1202 16:16:50.930007  601673 system_pods.go:89] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running
	I1202 16:16:50.930011  601673 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running
	I1202 16:16:50.930015  601673 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running
	I1202 16:16:50.930019  601673 system_pods.go:89] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running
	I1202 16:16:50.930022  601673 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running
	I1202 16:16:50.930029  601673 system_pods.go:89] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:50.930063  601673 retry.go:31] will retry after 275.463128ms: missing components: kube-dns
	I1202 16:16:51.210416  601673 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:51.210481  601673 system_pods.go:89] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Running
	I1202 16:16:51.210491  601673 system_pods.go:89] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running
	I1202 16:16:51.210501  601673 system_pods.go:89] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running
	I1202 16:16:51.210507  601673 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running
	I1202 16:16:51.210515  601673 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running
	I1202 16:16:51.210521  601673 system_pods.go:89] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running
	I1202 16:16:51.210528  601673 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running
	I1202 16:16:51.210535  601673 system_pods.go:89] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Running
	I1202 16:16:51.210547  601673 system_pods.go:126] duration metric: took 283.431625ms to wait for k8s-apps to be running ...
	I1202 16:16:51.210564  601673 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:16:51.210631  601673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:16:51.224133  601673 system_svc.go:56] duration metric: took 13.558472ms WaitForService to wait for kubelet
	I1202 16:16:51.224167  601673 kubeadm.go:587] duration metric: took 12.119111661s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:16:51.224189  601673 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:16:51.227262  601673 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:16:51.227289  601673 node_conditions.go:123] node cpu capacity is 8
	I1202 16:16:51.227306  601673 node_conditions.go:105] duration metric: took 3.11092ms to run NodePressure ...
	I1202 16:16:51.227321  601673 start.go:242] waiting for startup goroutines ...
	I1202 16:16:51.227332  601673 start.go:247] waiting for cluster config update ...
	I1202 16:16:51.227348  601673 start.go:256] writing updated cluster config ...
	I1202 16:16:51.227668  601673 ssh_runner.go:195] Run: rm -f paused
	I1202 16:16:51.231477  601673 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:16:51.235195  601673 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6h6nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.239606  601673 pod_ready.go:94] pod "coredns-66bc5c9577-6h6nr" is "Ready"
	I1202 16:16:51.239632  601673 pod_ready.go:86] duration metric: took 4.41532ms for pod "coredns-66bc5c9577-6h6nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.241799  601673 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.245668  601673 pod_ready.go:94] pod "etcd-default-k8s-diff-port-806420" is "Ready"
	I1202 16:16:51.245694  601673 pod_ready.go:86] duration metric: took 3.871864ms for pod "etcd-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.247689  601673 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.251364  601673 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-806420" is "Ready"
	I1202 16:16:51.251383  601673 pod_ready.go:86] duration metric: took 3.67643ms for pod "kube-apiserver-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.253305  601673 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.635835  601673 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-806420" is "Ready"
	I1202 16:16:51.635867  601673 pod_ready.go:86] duration metric: took 382.541932ms for pod "kube-controller-manager-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:51.836538  601673 pod_ready.go:83] waiting for pod "kube-proxy-574km" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:52.236229  601673 pod_ready.go:94] pod "kube-proxy-574km" is "Ready"
	I1202 16:16:52.236255  601673 pod_ready.go:86] duration metric: took 399.693213ms for pod "kube-proxy-574km" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:52.437030  601673 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:52.836314  601673 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-806420" is "Ready"
	I1202 16:16:52.836348  601673 pod_ready.go:86] duration metric: took 399.28942ms for pod "kube-scheduler-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:16:52.836361  601673 pod_ready.go:40] duration metric: took 1.604860526s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:16:52.884069  601673 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 16:16:52.885811  601673 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-806420" cluster and "default" namespace by default
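The extra pod_ready wait a few lines above checks, for each control-plane label (k8s-app=kube-dns, component=etcd, and so on), that the matching pod reports the Ready condition. Below is a sketch of the same kind of check done with client-go; the kubeconfig path and namespace are the standard ones, and this is an independent illustration rather than minikube's pod_ready.go:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// load ~/.kube/config, which the log says now points at the new cluster
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// one of the label selectors listed in the log above
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		fmt.Printf("%s Ready=%v\n", p.Name, ready)
    	}
    }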
	I1202 16:16:48.759589  609654 out.go:252] * Restarting existing docker container for "no-preload-534842" ...
	I1202 16:16:48.759682  609654 cli_runner.go:164] Run: docker start no-preload-534842
	I1202 16:16:49.058055  609654 cli_runner.go:164] Run: docker container inspect no-preload-534842 --format={{.State.Status}}
	I1202 16:16:49.079359  609654 kic.go:430] container "no-preload-534842" state is running.
	I1202 16:16:49.079821  609654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-534842
	I1202 16:16:49.109879  609654 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/no-preload-534842/config.json ...
	I1202 16:16:49.110163  609654 machine.go:94] provisionDockerMachine start ...
	I1202 16:16:49.110248  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:49.132511  609654 main.go:143] libmachine: Using SSH client type: native
	I1202 16:16:49.132850  609654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33245 <nil> <nil>}
	I1202 16:16:49.132870  609654 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:16:49.133759  609654 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38706->127.0.0.1:33245: read: connection reset by peer
	I1202 16:16:52.275690  609654 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-534842
	
	I1202 16:16:52.275726  609654 ubuntu.go:182] provisioning hostname "no-preload-534842"
	I1202 16:16:52.275785  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:52.295206  609654 main.go:143] libmachine: Using SSH client type: native
	I1202 16:16:52.295445  609654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33245 <nil> <nil>}
	I1202 16:16:52.295467  609654 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-534842 && echo "no-preload-534842" | sudo tee /etc/hostname
	I1202 16:16:52.447590  609654 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-534842
	
	I1202 16:16:52.447685  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:52.465844  609654 main.go:143] libmachine: Using SSH client type: native
	I1202 16:16:52.466126  609654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33245 <nil> <nil>}
	I1202 16:16:52.466145  609654 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-534842' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-534842/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-534842' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:16:52.608395  609654 main.go:143] libmachine: SSH cmd err, output: <nil>: 
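The provisioning commands above (hostname, /etc/hostname, /etc/hosts) run over a plain SSH connection to the container's forwarded port 33245 as the docker user, authenticated with the per-machine id_rsa shown in the log. A stripped-down sketch of running one such command with golang.org/x/crypto/ssh; error handling is minimal and the key path is the one from the log:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// key path, user, and forwarded port taken from the log above
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local container
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33245", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()

    	// same command the provisioner runs above
    	out, err := session.Output(`sudo hostname no-preload-534842 && echo "no-preload-534842" | sudo tee /etc/hostname`)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(string(out))
    }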
	I1202 16:16:52.608444  609654 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:16:52.608499  609654 ubuntu.go:190] setting up certificates
	I1202 16:16:52.608512  609654 provision.go:84] configureAuth start
	I1202 16:16:52.608575  609654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-534842
	I1202 16:16:52.628587  609654 provision.go:143] copyHostCerts
	I1202 16:16:52.628647  609654 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:16:52.628679  609654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:16:52.628749  609654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:16:52.628854  609654 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:16:52.628864  609654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:16:52.628892  609654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:16:52.628953  609654 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:16:52.628960  609654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:16:52.628982  609654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:16:52.629033  609654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.no-preload-534842 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-534842]
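configureAuth then generates a server certificate for the machine whose SANs are exactly the names and addresses listed above (127.0.0.1, 192.168.94.2, localhost, minikube, no-preload-534842). The sketch below builds a certificate with those SANs using crypto/x509; for brevity it is self-signed, whereas the real server.pem is signed by the minikube CA, and the 26280h lifetime is borrowed from the CertExpiration field in the cluster config earlier in this log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	priv, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-534842"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs copied from the san=[...] list in the log line above
    		DNSNames:    []string{"localhost", "minikube", "no-preload-534842"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
    	}
    	// self-signed for the sketch; the real cert is signed with ca.pem/ca-key.pem
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		log.Fatal(err)
    	}
    }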
	I1202 16:16:52.736299  609654 provision.go:177] copyRemoteCerts
	I1202 16:16:52.736365  609654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:16:52.736402  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:52.754821  609654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa Username:docker}
	I1202 16:16:52.856864  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:16:52.876063  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 16:16:52.895859  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 16:16:52.917681  609654 provision.go:87] duration metric: took 309.151106ms to configureAuth
	I1202 16:16:52.917713  609654 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:16:52.917948  609654 config.go:182] Loaded profile config "no-preload-534842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:16:52.918065  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:52.939470  609654 main.go:143] libmachine: Using SSH client type: native
	I1202 16:16:52.939783  609654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33245 <nil> <nil>}
	I1202 16:16:52.939817  609654 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:16:53.292084  609654 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:16:53.292119  609654 machine.go:97] duration metric: took 4.181935199s to provisionDockerMachine
	I1202 16:16:53.292134  609654 start.go:293] postStartSetup for "no-preload-534842" (driver="docker")
	I1202 16:16:53.292151  609654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:16:53.292217  609654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:16:53.292268  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:53.314292  609654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa Username:docker}
	I1202 16:16:53.420588  609654 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:16:53.424747  609654 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:16:53.424780  609654 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:16:53.424793  609654 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:16:53.424848  609654 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:16:53.424919  609654 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:16:53.425013  609654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:16:53.434131  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:16:53.456543  609654 start.go:296] duration metric: took 164.391677ms for postStartSetup
	I1202 16:16:53.456652  609654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:16:53.456710  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:53.476607  609654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa Username:docker}
	I1202 16:16:53.580113  609654 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:16:53.584940  609654 fix.go:56] duration metric: took 4.851956324s for fixHost
	I1202 16:16:53.584962  609654 start.go:83] releasing machines lock for "no-preload-534842", held for 4.852003258s
	I1202 16:16:53.585018  609654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-534842
	I1202 16:16:53.605361  609654 ssh_runner.go:195] Run: cat /version.json
	I1202 16:16:53.605443  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:53.605445  609654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:16:53.605523  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:53.628023  609654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa Username:docker}
	I1202 16:16:53.628206  609654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa Username:docker}
	I1202 16:16:53.733035  609654 ssh_runner.go:195] Run: systemctl --version
	I1202 16:16:53.805159  609654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:16:53.845404  609654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:16:53.852150  609654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:16:53.852267  609654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:16:53.863351  609654 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 16:16:53.863378  609654 start.go:496] detecting cgroup driver to use...
	I1202 16:16:53.863416  609654 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:16:53.863486  609654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:16:53.880464  609654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:16:53.894670  609654 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:16:53.894754  609654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:16:53.912914  609654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:16:53.926898  609654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:16:54.021932  609654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:16:54.117211  609654 docker.go:234] disabling docker service ...
	I1202 16:16:54.117274  609654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:16:54.132786  609654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:16:54.148507  609654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:16:54.241762  609654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:16:54.334324  609654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:16:54.348196  609654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:16:54.364628  609654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:16:54.364691  609654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:54.374331  609654 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:16:54.374404  609654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:54.384519  609654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:54.394991  609654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:54.404261  609654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:16:54.413838  609654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:54.424787  609654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:54.433892  609654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:16:54.443734  609654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:16:54.452956  609654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:16:54.461301  609654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:16:54.557524  609654 ssh_runner.go:195] Run: sudo systemctl restart crio
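The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: the pause image becomes registry.k8s.io/pause:3.10.1, cgroup_manager becomes "systemd", conmon_cgroup becomes "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls before crio is restarted. The sketch below states the same end result as one drop-in file written from Go; the [crio.image]/[crio.runtime] section names follow CRI-O's documented config layout rather than anything shown in this log, and the 99-example.conf filename is made up for illustration:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	// Same settings the sed edits above leave behind, expressed as one drop-in.
    	lines := []string{
    		"[crio.image]",
    		`pause_image = "registry.k8s.io/pause:3.10.1"`,
    		"",
    		"[crio.runtime]",
    		`cgroup_manager = "systemd"`,
    		`conmon_cgroup = "pod"`,
    		"default_sysctls = [",
    		`  "net.ipv4.ip_unprivileged_port_start=0",`,
    		"]",
    		"",
    	}
    	conf := strings.Join(lines, "\n")
    	if err := os.WriteFile("/etc/crio/crio.conf.d/99-example.conf", []byte(conf), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	// a `sudo systemctl restart crio` is still needed afterwards, as in the log
    }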
	I1202 16:16:54.709995  609654 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:16:54.710065  609654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:16:54.714116  609654 start.go:564] Will wait 60s for crictl version
	I1202 16:16:54.714186  609654 ssh_runner.go:195] Run: which crictl
	I1202 16:16:54.718007  609654 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:16:54.745154  609654 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 16:16:54.745235  609654 ssh_runner.go:195] Run: crio --version
	I1202 16:16:54.775549  609654 ssh_runner.go:195] Run: crio --version
	I1202 16:16:54.810748  609654 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 16:16:50.455622  607516 addons.go:530] duration metric: took 3.114210081s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1202 16:16:50.456296  607516 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1202 16:16:50.456330  607516 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1202 16:16:50.951597  607516 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 16:16:50.957528  607516 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1202 16:16:50.958924  607516 api_server.go:141] control plane version: v1.28.0
	I1202 16:16:50.958964  607516 api_server.go:131] duration metric: took 508.340341ms to wait for apiserver health ...
	I1202 16:16:50.958972  607516 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:16:50.963612  607516 system_pods.go:59] 8 kube-system pods found
	I1202 16:16:50.963653  607516 system_pods.go:61] "coredns-5dd5756b68-fsfh2" [b7a09569-0c93-481f-9bf0-4c943f83bcb2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:50.963665  607516 system_pods.go:61] "etcd-old-k8s-version-380588" [aff7505d-70ab-4273-8637-a5daabdab20a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:16:50.963687  607516 system_pods.go:61] "kindnet-cd4m6" [b00824ca-1af5-4aa6-b0a8-09f83c30bf49] Running
	I1202 16:16:50.963717  607516 system_pods.go:61] "kube-apiserver-old-k8s-version-380588" [795c77d5-8f84-434a-9ec8-a88941f79dac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:16:50.963735  607516 system_pods.go:61] "kube-controller-manager-old-k8s-version-380588" [13868431-952c-45e1-9b5c-9410a2e7123d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:16:50.963745  607516 system_pods.go:61] "kube-proxy-jqstm" [c32e74d7-f05f-4cbc-940e-bf5ce7f65de8] Running
	I1202 16:16:50.963754  607516 system_pods.go:61] "kube-scheduler-old-k8s-version-380588" [dd55e6b3-16b5-4fdc-a35b-6e38c0cf2dd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:16:50.963768  607516 system_pods.go:61] "storage-provisioner" [de6d872c-38c7-4bfa-a997-52fcc9c64976] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:50.963778  607516 system_pods.go:74] duration metric: took 4.798628ms to wait for pod list to return data ...
	I1202 16:16:50.963793  607516 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:16:50.966524  607516 default_sa.go:45] found service account: "default"
	I1202 16:16:50.966548  607516 default_sa.go:55] duration metric: took 2.747525ms for default service account to be created ...
	I1202 16:16:50.966560  607516 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:16:50.970641  607516 system_pods.go:86] 8 kube-system pods found
	I1202 16:16:50.970688  607516 system_pods.go:89] "coredns-5dd5756b68-fsfh2" [b7a09569-0c93-481f-9bf0-4c943f83bcb2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:16:50.970703  607516 system_pods.go:89] "etcd-old-k8s-version-380588" [aff7505d-70ab-4273-8637-a5daabdab20a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:16:50.970715  607516 system_pods.go:89] "kindnet-cd4m6" [b00824ca-1af5-4aa6-b0a8-09f83c30bf49] Running
	I1202 16:16:50.970725  607516 system_pods.go:89] "kube-apiserver-old-k8s-version-380588" [795c77d5-8f84-434a-9ec8-a88941f79dac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:16:50.970735  607516 system_pods.go:89] "kube-controller-manager-old-k8s-version-380588" [13868431-952c-45e1-9b5c-9410a2e7123d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:16:50.970742  607516 system_pods.go:89] "kube-proxy-jqstm" [c32e74d7-f05f-4cbc-940e-bf5ce7f65de8] Running
	I1202 16:16:50.970802  607516 system_pods.go:89] "kube-scheduler-old-k8s-version-380588" [dd55e6b3-16b5-4fdc-a35b-6e38c0cf2dd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:16:50.970820  607516 system_pods.go:89] "storage-provisioner" [de6d872c-38c7-4bfa-a997-52fcc9c64976] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:16:50.970830  607516 system_pods.go:126] duration metric: took 4.261656ms to wait for k8s-apps to be running ...
	I1202 16:16:50.970840  607516 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:16:50.970898  607516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:16:50.986227  607516 system_svc.go:56] duration metric: took 15.375333ms WaitForService to wait for kubelet
	I1202 16:16:50.986261  607516 kubeadm.go:587] duration metric: took 3.644881423s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:16:50.986286  607516 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:16:50.990401  607516 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:16:50.990491  607516 node_conditions.go:123] node cpu capacity is 8
	I1202 16:16:50.990546  607516 node_conditions.go:105] duration metric: took 4.253233ms to run NodePressure ...
	I1202 16:16:50.990574  607516 start.go:242] waiting for startup goroutines ...
	I1202 16:16:50.990592  607516 start.go:247] waiting for cluster config update ...
	I1202 16:16:50.990622  607516 start.go:256] writing updated cluster config ...
	I1202 16:16:50.990945  607516 ssh_runner.go:195] Run: rm -f paused
	I1202 16:16:50.997187  607516 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:16:51.003495  607516 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-fsfh2" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 16:16:53.011469  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	W1202 16:16:55.011968  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	I1202 16:16:54.812233  609654 cli_runner.go:164] Run: docker network inspect no-preload-534842 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:16:54.835002  609654 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1202 16:16:54.839662  609654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
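The bash one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the network gateway (192.168.94.1 here), first dropping any stale entry. The same edit expressed in Go, as an illustration only; the path and gateway address are the ones in the log, and the write requires root on the node:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.94.1\thost.minikube.internal" // gateway from the log

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// drop any existing host.minikube.internal line, like the `grep -v` above
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
    	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pinned", entry)
    }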
	I1202 16:16:54.850586  609654 kubeadm.go:884] updating cluster {Name:no-preload-534842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-534842 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:16:54.850733  609654 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:16:54.850784  609654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:16:54.885794  609654 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:16:54.885818  609654 cache_images.go:86] Images are preloaded, skipping loading
	I1202 16:16:54.885833  609654 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1202 16:16:54.885941  609654 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-534842 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-534842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:16:54.886025  609654 ssh_runner.go:195] Run: crio config
	I1202 16:16:54.952227  609654 cni.go:84] Creating CNI manager for ""
	I1202 16:16:54.952251  609654 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:16:54.952312  609654 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 16:16:54.952346  609654 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-534842 NodeName:no-preload-534842 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:16:54.952523  609654 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-534842"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
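	The generated configuration above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A stdlib-only Go sketch for eyeballing which kinds such a generated stream contains; listKinds is a hypothetical helper, not part of minikube, and the path is the one from the log.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// listKinds does a naive line scan over a multi-document YAML stream and
	// returns the value of the first "kind:" line in each document.
	func listKinds(stream string) []string {
		var kinds []string
		for _, doc := range strings.Split(stream, "\n---") {
			for _, line := range strings.Split(doc, "\n") {
				trimmed := strings.TrimSpace(line)
				if strings.HasPrefix(trimmed, "kind:") {
					kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
					break
				}
			}
		}
		return kinds
	}

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path used by minikube above
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(listKinds(string(data))) // expect the four kinds shown above
	}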
	
	I1202 16:16:54.952616  609654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 16:16:54.961869  609654 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 16:16:54.961928  609654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:16:54.970410  609654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 16:16:54.985386  609654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 16:16:55.000715  609654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1202 16:16:55.017312  609654 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:16:55.021601  609654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:16:55.033114  609654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:16:55.139862  609654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:16:55.172215  609654 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/no-preload-534842 for IP: 192.168.94.2
	I1202 16:16:55.172347  609654 certs.go:195] generating shared ca certs ...
	I1202 16:16:55.172371  609654 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:55.172559  609654 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:16:55.172687  609654 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:16:55.172705  609654 certs.go:257] generating profile certs ...
	I1202 16:16:55.172840  609654 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/no-preload-534842/client.key
	I1202 16:16:55.172914  609654 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/no-preload-534842/apiserver.key.70a91745
	I1202 16:16:55.172961  609654 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/no-preload-534842/proxy-client.key
	I1202 16:16:55.173092  609654 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:16:55.173143  609654 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:16:55.173153  609654 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:16:55.173184  609654 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:16:55.173214  609654 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:16:55.173257  609654 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:16:55.173324  609654 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:16:55.174090  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:16:55.197176  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:16:55.220493  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:16:55.245136  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:16:55.278757  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/no-preload-534842/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 16:16:55.303613  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/no-preload-534842/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 16:16:55.326671  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/no-preload-534842/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:16:55.350315  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/no-preload-534842/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 16:16:55.373307  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:16:55.399057  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:16:55.420714  609654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:16:55.440499  609654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:16:55.456525  609654 ssh_runner.go:195] Run: openssl version
	I1202 16:16:55.465086  609654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:16:55.476011  609654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:16:55.480079  609654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:16:55.480146  609654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:16:55.520493  609654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:16:55.529365  609654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:16:55.538513  609654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:16:55.542485  609654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:16:55.542540  609654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:16:55.597224  609654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:16:55.606897  609654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:16:55.617205  609654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:16:55.621401  609654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:16:55.621484  609654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:16:55.663313  609654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:16:55.671922  609654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:16:55.676017  609654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:16:55.718445  609654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:16:55.764851  609654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:16:55.811938  609654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:16:55.865071  609654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:16:55.904615  609654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
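	The six openssl x509 -checkend 86400 runs above confirm that each control-plane certificate stays valid for at least another 24 hours before the existing files are reused. A minimal Go sketch of the same check; expiresWithin is a hypothetical helper, and the path is one of the certificates probed above.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires within the given window, like `openssl x509 -checkend`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}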
	I1202 16:16:55.941922  609654 kubeadm.go:401] StartCluster: {Name:no-preload-534842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-534842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Moun
tUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:16:55.942028  609654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:16:55.942087  609654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:16:55.974509  609654 cri.go:89] found id: "ef4d71f3dba7f249c2dccfb9492705acceca27d92b988ad3f3be8ddf967a2524"
	I1202 16:16:55.974539  609654 cri.go:89] found id: "7f5c2cae2aa291edcbbe0f927b622ca7853d0323468ef1d4662a47fc47dab2a7"
	I1202 16:16:55.974545  609654 cri.go:89] found id: "44a6ec8649ccbb15298488aba888279a5c30ed43f97b8e65953b50f4199a5f54"
	I1202 16:16:55.974549  609654 cri.go:89] found id: "ec6d57760ee61c8da2007c23b76750466cdaa245ef7a003ac8ccc74510f7bd2e"
	I1202 16:16:55.974553  609654 cri.go:89] found id: ""
	I1202 16:16:55.974607  609654 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 16:16:55.988886  609654 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:16:55Z" level=error msg="open /run/runc: no such file or directory"
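	StartCluster first enumerates the kube-system containers with crictl (the four IDs above), then tolerates the runc list failure since /run/runc is absent. A small sketch of that first enumeration step, using the exact crictl invocation from the log; kubeSystemContainerIDs is a hypothetical helper and needs root, as in the report.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainerIDs lists all kube-system container IDs via crictl,
	// mirroring the ssh_runner invocation in the log above.
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		fmt.Println(ids, err)
	}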
	I1202 16:16:55.988957  609654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:16:55.997628  609654 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:16:55.997670  609654 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:16:55.997727  609654 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:16:56.006219  609654 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:16:56.007604  609654 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-534842" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:16:56.008572  609654 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-264555/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-534842" cluster setting kubeconfig missing "no-preload-534842" context setting]
	I1202 16:16:56.010018  609654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:56.012199  609654 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:16:56.022473  609654 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1202 16:16:56.022511  609654 kubeadm.go:602] duration metric: took 24.833352ms to restartPrimaryControlPlane
	I1202 16:16:56.022523  609654 kubeadm.go:403] duration metric: took 80.620527ms to StartCluster
	I1202 16:16:56.022544  609654 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:56.022627  609654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:16:56.025061  609654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:16:56.025384  609654 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:16:56.025466  609654 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 16:16:56.025580  609654 addons.go:70] Setting storage-provisioner=true in profile "no-preload-534842"
	I1202 16:16:56.025599  609654 config.go:182] Loaded profile config "no-preload-534842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:16:56.025614  609654 addons.go:239] Setting addon storage-provisioner=true in "no-preload-534842"
	W1202 16:16:56.025623  609654 addons.go:248] addon storage-provisioner should already be in state true
	I1202 16:16:56.025647  609654 addons.go:70] Setting dashboard=true in profile "no-preload-534842"
	I1202 16:16:56.025664  609654 host.go:66] Checking if "no-preload-534842" exists ...
	I1202 16:16:56.025670  609654 addons.go:239] Setting addon dashboard=true in "no-preload-534842"
	W1202 16:16:56.025684  609654 addons.go:248] addon dashboard should already be in state true
	I1202 16:16:56.025707  609654 host.go:66] Checking if "no-preload-534842" exists ...
	I1202 16:16:56.026153  609654 cli_runner.go:164] Run: docker container inspect no-preload-534842 --format={{.State.Status}}
	I1202 16:16:56.026158  609654 cli_runner.go:164] Run: docker container inspect no-preload-534842 --format={{.State.Status}}
	I1202 16:16:56.026325  609654 addons.go:70] Setting default-storageclass=true in profile "no-preload-534842"
	I1202 16:16:56.026350  609654 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-534842"
	I1202 16:16:56.026664  609654 cli_runner.go:164] Run: docker container inspect no-preload-534842 --format={{.State.Status}}
	I1202 16:16:56.027819  609654 out.go:179] * Verifying Kubernetes components...
	I1202 16:16:56.029368  609654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:16:56.058976  609654 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:16:56.058994  609654 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 16:16:56.060152  609654 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:16:56.060173  609654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 16:16:56.060189  609654 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 16:16:56.060240  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:56.061146  609654 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:16:56.061161  609654 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:16:56.061223  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:56.068915  609654 addons.go:239] Setting addon default-storageclass=true in "no-preload-534842"
	W1202 16:16:56.068944  609654 addons.go:248] addon default-storageclass should already be in state true
	I1202 16:16:56.068976  609654 host.go:66] Checking if "no-preload-534842" exists ...
	I1202 16:16:56.069455  609654 cli_runner.go:164] Run: docker container inspect no-preload-534842 --format={{.State.Status}}
	I1202 16:16:56.108448  609654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa Username:docker}
	I1202 16:16:56.112043  609654 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:16:56.112317  609654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa Username:docker}
	I1202 16:16:56.112066  609654 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:16:56.113482  609654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:16:56.142646  609654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa Username:docker}
	I1202 16:16:56.220877  609654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:16:56.239767  609654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:16:56.239911  609654 node_ready.go:35] waiting up to 6m0s for node "no-preload-534842" to be "Ready" ...
	I1202 16:16:56.244843  609654 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:16:56.244868  609654 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:16:56.265700  609654 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:16:56.265732  609654 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:16:56.274888  609654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:16:56.288800  609654 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:16:56.288834  609654 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:16:56.317078  609654 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:16:56.317104  609654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:16:56.336513  609654 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:16:56.336540  609654 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:16:56.354230  609654 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:16:56.354258  609654 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:16:56.371791  609654 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:16:56.371819  609654 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:16:56.387574  609654 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:16:56.387601  609654 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:16:56.402310  609654 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:16:56.402362  609654 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:16:56.418201  609654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:16:57.166173  609654 node_ready.go:49] node "no-preload-534842" is "Ready"
	I1202 16:16:57.166212  609654 node_ready.go:38] duration metric: took 926.252594ms for node "no-preload-534842" to be "Ready" ...
	I1202 16:16:57.166231  609654 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:16:57.166292  609654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:16:57.760321  609654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.520504726s)
	I1202 16:16:57.760368  609654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.485438511s)
	I1202 16:16:57.760495  609654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.342251241s)
	I1202 16:16:57.760592  609654 api_server.go:72] duration metric: took 1.735169763s to wait for apiserver process to appear ...
	I1202 16:16:57.760628  609654 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:16:57.760651  609654 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1202 16:16:57.762574  609654 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-534842 addons enable metrics-server
	
	I1202 16:16:57.765996  609654 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:16:57.766023  609654 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 16:16:57.770325  609654 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 16:16:57.771501  609654 addons.go:530] duration metric: took 1.746036615s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 16:16:58.261564  609654 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1202 16:16:58.265592  609654 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:16:58.265619  609654 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:16:57.511044  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	W1202 16:17:00.009999  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
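	The /healthz probes above keep returning 500 while the rbac/bootstrap-roles (and initially the scheduling/bootstrap-system-priority-classes) post-start hooks are still pending; minikube simply re-polls until the endpoint goes green. A minimal sketch of such a polling loop; waitHealthz is a hypothetical helper, and TLS verification is skipped only because this sketch does not load the cluster CA that minikube uses.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls an apiserver /healthz endpoint until it returns 200 or
	// the deadline passes, roughly the loop seen in api_server.go above.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz did not become ready within %s", timeout)
	}

	func main() {
		fmt.Println(waitHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute))
	}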
	
	
	==> CRI-O <==
	Dec 02 16:16:50 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:50.950147127Z" level=info msg="Starting container: 87fb0c684e0a4e372606dfc0978ef96f321c21088372606080e58f3a3794f371" id=f998eb34-3e01-420f-b40b-056e74fae750 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:16:50 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:50.952173952Z" level=info msg="Started container" PID=1825 containerID=87fb0c684e0a4e372606dfc0978ef96f321c21088372606080e58f3a3794f371 description=kube-system/coredns-66bc5c9577-6h6nr/coredns id=f998eb34-3e01-420f-b40b-056e74fae750 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8f2aa847835659bd1b242dacfc921a81053e0c8d893ede3e0bb0b0812106328
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.380654231Z" level=info msg="Running pod sandbox: default/busybox/POD" id=63605e63-9012-46e5-97e7-37e5e06bf7fd name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.380721493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.38603835Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fab98a272e8c27ede12b995d87c60af75e97a5843acef94405a7b3a2db2a479c UID:5fb97362-c18a-4a19-bcc3-d79520c4276f NetNS:/var/run/netns/304d383a-59ba-4e43-9eac-072078dce4ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004cc470}] Aliases:map[]}"
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.386080287Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.3971662Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fab98a272e8c27ede12b995d87c60af75e97a5843acef94405a7b3a2db2a479c UID:5fb97362-c18a-4a19-bcc3-d79520c4276f NetNS:/var/run/netns/304d383a-59ba-4e43-9eac-072078dce4ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004cc470}] Aliases:map[]}"
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.397344021Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.398521072Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.39974388Z" level=info msg="Ran pod sandbox fab98a272e8c27ede12b995d87c60af75e97a5843acef94405a7b3a2db2a479c with infra container: default/busybox/POD" id=63605e63-9012-46e5-97e7-37e5e06bf7fd name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.401042711Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=be7155ce-7202-4d5a-b7e3-632f61c28a67 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.401163886Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=be7155ce-7202-4d5a-b7e3-632f61c28a67 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.401211447Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=be7155ce-7202-4d5a-b7e3-632f61c28a67 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.402127545Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fa3f8c3a-deca-46b4-a073-30f82747b6bd name=/runtime.v1.ImageService/PullImage
	Dec 02 16:16:53 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:53.403908541Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 02 16:16:55 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:55.435141516Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=fa3f8c3a-deca-46b4-a073-30f82747b6bd name=/runtime.v1.ImageService/PullImage
	Dec 02 16:16:55 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:55.435976525Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e47f1444-51bb-4023-a47b-f290bc3d49b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:55 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:55.437590864Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=353a66c5-7e8f-499e-84a5-c0d264638d6b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:16:55 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:55.441118181Z" level=info msg="Creating container: default/busybox/busybox" id=12078e4f-1ec8-46c8-a36a-8fa108ad65b2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:16:55 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:55.441243888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:55 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:55.445985761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:55 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:55.446374623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:16:55 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:55.467345734Z" level=info msg="Created container 4f64280653a25fb1852137c66a4bebf5b71726ea1b326d055387f143788acd93: default/busybox/busybox" id=12078e4f-1ec8-46c8-a36a-8fa108ad65b2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:16:55 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:55.468084828Z" level=info msg="Starting container: 4f64280653a25fb1852137c66a4bebf5b71726ea1b326d055387f143788acd93" id=ce4a63c5-bc17-4a2e-8385-61ca6fdd8b5d name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:16:55 default-k8s-diff-port-806420 crio[775]: time="2025-12-02T16:16:55.469796683Z" level=info msg="Started container" PID=1898 containerID=4f64280653a25fb1852137c66a4bebf5b71726ea1b326d055387f143788acd93 description=default/busybox/busybox id=ce4a63c5-bc17-4a2e-8385-61ca6fdd8b5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=fab98a272e8c27ede12b995d87c60af75e97a5843acef94405a7b3a2db2a479c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	4f64280653a25       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   fab98a272e8c2       busybox                                                default
	87fb0c684e0a4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   b8f2aa8478356       coredns-66bc5c9577-6h6nr                               kube-system
	c768697ab5113       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   959322e5ae1aa       storage-provisioner                                    kube-system
	d845dd53a3af1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   6a9accc1e5f8f       kindnet-pc8st                                          kube-system
	adadbb07934fb       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      24 seconds ago      Running             kube-proxy                0                   5b0fbf5d40aec       kube-proxy-574km                                       kube-system
	bb2462bce39e4       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      34 seconds ago      Running             kube-apiserver            0                   3e9445ff53eff       kube-apiserver-default-k8s-diff-port-806420            kube-system
	86678d102fc40       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      34 seconds ago      Running             kube-controller-manager   0                   57b49fdf7d333       kube-controller-manager-default-k8s-diff-port-806420   kube-system
	b442c9b373be6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   6de82365a51cb       etcd-default-k8s-diff-port-806420                      kube-system
	0426517e59216       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      34 seconds ago      Running             kube-scheduler            0                   0c38784130b33       kube-scheduler-default-k8s-diff-port-806420            kube-system
	
	
	==> coredns [87fb0c684e0a4e372606dfc0978ef96f321c21088372606080e58f3a3794f371] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47556 - 11427 "HINFO IN 4439274999913816739.7772247589041522960. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.073400541s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-806420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-806420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=default-k8s-diff-port-806420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_16_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:16:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-806420
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:16:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:16:50 +0000   Tue, 02 Dec 2025 16:16:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:16:50 +0000   Tue, 02 Dec 2025 16:16:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:16:50 +0000   Tue, 02 Dec 2025 16:16:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:16:50 +0000   Tue, 02 Dec 2025 16:16:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-806420
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                48c4c192-0280-419c-8cb9-032c0b3b12b9
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-6h6nr                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-806420                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-pc8st                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-806420             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-806420    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-574km                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-806420             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node default-k8s-diff-port-806420 event: Registered Node default-k8s-diff-port-806420 in Controller
	  Normal  NodeReady                14s   kubelet          Node default-k8s-diff-port-806420 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [b442c9b373be6fb72dc310c06166717da1cf4bc37ff97cd672351b785df6b1c7] <==
	{"level":"warn","ts":"2025-12-02T16:16:30.638730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.649874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.659058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.668563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.677276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.684965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.692740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.700035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.709357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.720675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.733799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.742754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.752095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.762168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.771244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.781633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.789486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.796702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.804310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.813282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.836761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.840964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.849452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.856749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:30.909577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55052","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 16:17:04 up  2:59,  0 user,  load average: 3.88, 4.01, 2.60
	Linux default-k8s-diff-port-806420 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d845dd53a3af1a9f6898c092c78f13563ca92f358c78f57f6383ec552e0d74e1] <==
	I1202 16:16:40.143762       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 16:16:40.144046       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 16:16:40.144247       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:16:40.144271       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:16:40.144299       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:16:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:16:40.349715       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:16:40.349802       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:16:40.349819       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:16:40.443370       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 16:16:40.750536       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:16:40.750569       1 metrics.go:72] Registering metrics
	I1202 16:16:40.750652       1 controller.go:711] "Syncing nftables rules"
	I1202 16:16:50.349981       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:16:50.350028       1 main.go:301] handling current node
	I1202 16:17:00.352603       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:17:00.352652       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bb2462bce39e4b32aa5694bba1906ed3bac4da78c73f111c980e1f9993e60cf0] <==
	E1202 16:16:31.492098       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1202 16:16:31.537497       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:16:31.540896       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1202 16:16:31.540992       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:16:31.546677       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:16:31.547211       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 16:16:31.639242       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:16:32.340775       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1202 16:16:32.344490       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1202 16:16:32.344503       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 16:16:32.820015       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:16:32.855465       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:16:32.945093       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 16:16:32.950798       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1202 16:16:32.951844       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 16:16:32.955605       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:16:33.388124       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:16:34.095493       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:16:34.104634       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 16:16:34.112825       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 16:16:39.194174       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:16:39.199508       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:16:39.342790       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 16:16:39.491170       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1202 16:17:02.166107       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:44940: use of closed network connection
	
	
	==> kube-controller-manager [86678d102fc408bba871fb0f769c399d95f766f6e53f95dc8a12b1b3e3b91bcf] <==
	I1202 16:16:38.347341       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-806420" podCIDRs=["10.244.0.0/24"]
	I1202 16:16:38.354302       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1202 16:16:38.385944       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 16:16:38.387137       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 16:16:38.387147       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 16:16:38.387174       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 16:16:38.387183       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 16:16:38.387215       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 16:16:38.387265       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 16:16:38.387308       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 16:16:38.387415       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 16:16:38.387754       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 16:16:38.387880       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 16:16:38.388336       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 16:16:38.388363       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 16:16:38.388431       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 16:16:38.388705       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 16:16:38.391109       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 16:16:38.392520       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 16:16:38.395031       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 16:16:38.406249       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 16:16:38.406272       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 16:16:38.406278       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 16:16:38.410366       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 16:16:53.322298       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [adadbb07934fbc7db6fea2b7b460396d1bb3be8d7a8f4938976f412d1b65b971] <==
	I1202 16:16:39.918929       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:16:39.989324       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 16:16:40.090411       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 16:16:40.090514       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 16:16:40.090625       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:16:40.115347       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:16:40.115503       1 server_linux.go:132] "Using iptables Proxier"
	I1202 16:16:40.122445       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:16:40.123053       1 server.go:527] "Version info" version="v1.34.2"
	I1202 16:16:40.123136       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:16:40.128314       1 config.go:200] "Starting service config controller"
	I1202 16:16:40.128351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:16:40.129563       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:16:40.129647       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:16:40.129677       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:16:40.129699       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:16:40.129801       1 config.go:309] "Starting node config controller"
	I1202 16:16:40.129814       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:16:40.129823       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:16:40.228589       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 16:16:40.229764       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 16:16:40.229797       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0426517e5921675b8a7138f79e58aa9347eeaa8f20fab24bc91cb0dada6e90a9] <==
	E1202 16:16:31.400834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 16:16:31.400910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 16:16:31.401036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 16:16:31.401085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 16:16:31.401148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 16:16:31.401150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 16:16:31.401903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 16:16:31.401984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 16:16:31.402027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 16:16:31.401986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 16:16:32.220709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 16:16:32.273320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 16:16:32.273320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 16:16:32.316212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 16:16:32.335624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 16:16:32.337848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 16:16:32.346679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 16:16:32.454612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 16:16:32.514325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 16:16:32.543528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 16:16:32.556805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 16:16:32.577014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 16:16:32.578945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 16:16:32.720155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1202 16:16:35.497481       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 16:16:34 default-k8s-diff-port-806420 kubelet[1310]: E1202 16:16:34.973144    1310 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-806420\" already exists" pod="kube-system/etcd-default-k8s-diff-port-806420"
	Dec 02 16:16:34 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:34.991285    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-806420" podStartSLOduration=0.991262976 podStartE2EDuration="991.262976ms" podCreationTimestamp="2025-12-02 16:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:34.990915414 +0000 UTC m=+1.140185036" watchObservedRunningTime="2025-12-02 16:16:34.991262976 +0000 UTC m=+1.140532600"
	Dec 02 16:16:35 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:35.010191    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-806420" podStartSLOduration=1.010166629 podStartE2EDuration="1.010166629s" podCreationTimestamp="2025-12-02 16:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:35.000220672 +0000 UTC m=+1.149490320" watchObservedRunningTime="2025-12-02 16:16:35.010166629 +0000 UTC m=+1.159436254"
	Dec 02 16:16:35 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:35.020372    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-806420" podStartSLOduration=1.020346941 podStartE2EDuration="1.020346941s" podCreationTimestamp="2025-12-02 16:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:35.010360154 +0000 UTC m=+1.159629780" watchObservedRunningTime="2025-12-02 16:16:35.020346941 +0000 UTC m=+1.169616566"
	Dec 02 16:16:35 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:35.020681    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-806420" podStartSLOduration=1.020662544 podStartE2EDuration="1.020662544s" podCreationTimestamp="2025-12-02 16:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:35.020583493 +0000 UTC m=+1.169853117" watchObservedRunningTime="2025-12-02 16:16:35.020662544 +0000 UTC m=+1.169932169"
	Dec 02 16:16:38 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:38.360244    1310 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 02 16:16:38 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:38.360968    1310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 02 16:16:39 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:39.563782    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3766b4e1-7e00-4229-99a3-9eec486a3437-lib-modules\") pod \"kube-proxy-574km\" (UID: \"3766b4e1-7e00-4229-99a3-9eec486a3437\") " pod="kube-system/kube-proxy-574km"
	Dec 02 16:16:39 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:39.563848    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17b96563-2832-47ee-9d04-8e27db1a3c6b-xtables-lock\") pod \"kindnet-pc8st\" (UID: \"17b96563-2832-47ee-9d04-8e27db1a3c6b\") " pod="kube-system/kindnet-pc8st"
	Dec 02 16:16:39 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:39.563883    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjgsv\" (UniqueName: \"kubernetes.io/projected/17b96563-2832-47ee-9d04-8e27db1a3c6b-kube-api-access-qjgsv\") pod \"kindnet-pc8st\" (UID: \"17b96563-2832-47ee-9d04-8e27db1a3c6b\") " pod="kube-system/kindnet-pc8st"
	Dec 02 16:16:39 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:39.563920    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3766b4e1-7e00-4229-99a3-9eec486a3437-xtables-lock\") pod \"kube-proxy-574km\" (UID: \"3766b4e1-7e00-4229-99a3-9eec486a3437\") " pod="kube-system/kube-proxy-574km"
	Dec 02 16:16:39 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:39.563944    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjfvf\" (UniqueName: \"kubernetes.io/projected/3766b4e1-7e00-4229-99a3-9eec486a3437-kube-api-access-pjfvf\") pod \"kube-proxy-574km\" (UID: \"3766b4e1-7e00-4229-99a3-9eec486a3437\") " pod="kube-system/kube-proxy-574km"
	Dec 02 16:16:39 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:39.563966    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/17b96563-2832-47ee-9d04-8e27db1a3c6b-cni-cfg\") pod \"kindnet-pc8st\" (UID: \"17b96563-2832-47ee-9d04-8e27db1a3c6b\") " pod="kube-system/kindnet-pc8st"
	Dec 02 16:16:39 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:39.563988    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17b96563-2832-47ee-9d04-8e27db1a3c6b-lib-modules\") pod \"kindnet-pc8st\" (UID: \"17b96563-2832-47ee-9d04-8e27db1a3c6b\") " pod="kube-system/kindnet-pc8st"
	Dec 02 16:16:39 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:39.564022    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3766b4e1-7e00-4229-99a3-9eec486a3437-kube-proxy\") pod \"kube-proxy-574km\" (UID: \"3766b4e1-7e00-4229-99a3-9eec486a3437\") " pod="kube-system/kube-proxy-574km"
	Dec 02 16:16:39 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:39.985662    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pc8st" podStartSLOduration=0.985637447 podStartE2EDuration="985.637447ms" podCreationTimestamp="2025-12-02 16:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:39.985367163 +0000 UTC m=+6.134636788" watchObservedRunningTime="2025-12-02 16:16:39.985637447 +0000 UTC m=+6.134907073"
	Dec 02 16:16:39 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:39.997311    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-574km" podStartSLOduration=0.997285587 podStartE2EDuration="997.285587ms" podCreationTimestamp="2025-12-02 16:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:39.997112106 +0000 UTC m=+6.146381729" watchObservedRunningTime="2025-12-02 16:16:39.997285587 +0000 UTC m=+6.146555213"
	Dec 02 16:16:50 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:50.560473    1310 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 02 16:16:50 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:50.662622    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b3d4301c-a3b1-4c90-bb80-045b48b75011-tmp\") pod \"storage-provisioner\" (UID: \"b3d4301c-a3b1-4c90-bb80-045b48b75011\") " pod="kube-system/storage-provisioner"
	Dec 02 16:16:50 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:50.662705    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbkjb\" (UniqueName: \"kubernetes.io/projected/b3d4301c-a3b1-4c90-bb80-045b48b75011-kube-api-access-sbkjb\") pod \"storage-provisioner\" (UID: \"b3d4301c-a3b1-4c90-bb80-045b48b75011\") " pod="kube-system/storage-provisioner"
	Dec 02 16:16:50 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:50.662736    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c832d8c-99dc-4663-a386-c48abaf9335e-config-volume\") pod \"coredns-66bc5c9577-6h6nr\" (UID: \"7c832d8c-99dc-4663-a386-c48abaf9335e\") " pod="kube-system/coredns-66bc5c9577-6h6nr"
	Dec 02 16:16:50 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:50.662762    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cddmm\" (UniqueName: \"kubernetes.io/projected/7c832d8c-99dc-4663-a386-c48abaf9335e-kube-api-access-cddmm\") pod \"coredns-66bc5c9577-6h6nr\" (UID: \"7c832d8c-99dc-4663-a386-c48abaf9335e\") " pod="kube-system/coredns-66bc5c9577-6h6nr"
	Dec 02 16:16:51 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:51.011321    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.01129716 podStartE2EDuration="12.01129716s" podCreationTimestamp="2025-12-02 16:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:51.010935881 +0000 UTC m=+17.160205516" watchObservedRunningTime="2025-12-02 16:16:51.01129716 +0000 UTC m=+17.160566778"
	Dec 02 16:16:53 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:53.073405    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6h6nr" podStartSLOduration=14.073374841 podStartE2EDuration="14.073374841s" podCreationTimestamp="2025-12-02 16:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:16:51.026101778 +0000 UTC m=+17.175371403" watchObservedRunningTime="2025-12-02 16:16:53.073374841 +0000 UTC m=+19.222644467"
	Dec 02 16:16:53 default-k8s-diff-port-806420 kubelet[1310]: I1202 16:16:53.176456    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d47l\" (UniqueName: \"kubernetes.io/projected/5fb97362-c18a-4a19-bcc3-d79520c4276f-kube-api-access-2d47l\") pod \"busybox\" (UID: \"5fb97362-c18a-4a19-bcc3-d79520c4276f\") " pod="default/busybox"
	
	
	==> storage-provisioner [c768697ab5113687e2665873b605c8aeb16bc4322aedf8e5a71f934f7e787798] <==
	I1202 16:16:50.965166       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 16:16:50.975699       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 16:16:50.975767       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 16:16:50.978512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:50.985713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 16:16:50.985872       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 16:16:50.986170       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-806420_3ea33bbc-f59b-4ae0-8792-86f826df3b6d!
	I1202 16:16:50.986171       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b4c56b5-a58b-4bc5-b7a0-872e50c3350a", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-806420_3ea33bbc-f59b-4ae0-8792-86f826df3b6d became leader
	W1202 16:16:50.990240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:50.995871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 16:16:51.086672       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-806420_3ea33bbc-f59b-4ae0-8792-86f826df3b6d!
	W1202 16:16:53.000325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:53.005819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:55.010250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:55.015653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:57.019543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:57.024439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:59.028261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:16:59.033697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:01.037458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:01.041755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:03.046162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:03.055476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-806420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.88s)
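As a quick sanity check on the post-mortem above: the "Allocated resources" totals in the node description (cpu 850m/10%, memory 220Mi/0%) follow directly from the per-pod figures in the Non-terminated Pods table, which also confirms the node was lightly loaded when the addon command failed. The short Go program below simply redoes that arithmetic; the numbers are copied from this report and the program itself is illustrative only.

// Re-derives the "Allocated resources" totals from the per-pod
// requests/limits listed in the node description above. All numbers
// are copied from this report; only the arithmetic is new.
package main

import "fmt"

func sum(xs []int) int {
	t := 0
	for _, x := range xs {
		t += x
	}
	return t
}

func main() {
	// millicores: coredns, etcd, kindnet, kube-apiserver, kube-controller-manager, kube-scheduler
	cpuReq := []int{100, 100, 100, 250, 200, 100}
	cpuLim := []int{100} // kindnet
	// Mi: coredns, etcd, kindnet (requests) / coredns, kindnet (limits)
	memReq := []int{70, 100, 50}
	memLim := []int{170, 50}

	const allocCPU = 8000              // 8 CPUs in millicores
	const allocMemMi = 32863356 / 1024 // allocatable 32863356Ki, truncated to Mi

	fmt.Printf("cpu requests: %dm (%d%%)\n", sum(cpuReq), sum(cpuReq)*100/allocCPU)    // 850m (10%)
	fmt.Printf("cpu limits:   %dm (%d%%)\n", sum(cpuLim), sum(cpuLim)*100/allocCPU)    // 100m (1%)
	fmt.Printf("mem requests: %dMi (%d%%)\n", sum(memReq), sum(memReq)*100/allocMemMi) // 220Mi (0%)
	fmt.Printf("mem limits:   %dMi (%d%%)\n", sum(memLim), sum(memLim)*100/allocMemMi) // 220Mi (0%)
}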

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-380588 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-380588 --alsologtostderr -v=1: exit status 80 (2.322036745s)
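The stderr capture below shows what the pause step actually does on the node: check whether the kubelet is running, stop it, list the kube-system/kubernetes-dashboard/istio-operator containers via crictl, then call `sudo runc list -f json`. That last call fails with `open /run/runc: no such file or directory`, is retried three times with growing delays (172ms, 225ms, 425ms), and the command finally exits with GUEST_PAUSE. The Go sketch below illustrates only that retry pattern; it is a stand-in written for this report, not minikube's retry implementation, and the probe command and delays are taken from the trace.

// Minimal retry-with-growing-delay loop around a shell command,
// mirroring the pattern in the pause trace below. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The probe that fails in the trace; in CI it runs over SSH inside
	// the minikube container, here it simply runs locally.
	probe := []string{"sudo", "runc", "list", "-f", "json"}

	// Delays roughly matching the "will retry after ..." lines below.
	delays := []time.Duration{172 * time.Millisecond, 225 * time.Millisecond, 425 * time.Millisecond}

	var lastErr error
	for attempt := 0; ; attempt++ {
		out, err := exec.Command(probe[0], probe[1:]...).CombinedOutput()
		if err == nil {
			fmt.Printf("ok on attempt %d:\n%s", attempt+1, out)
			return
		}
		lastErr = fmt.Errorf("%v: %s", err, out)
		if attempt >= len(delays) { // initial try plus three retries, as in the trace
			break
		}
		fmt.Printf("will retry after %v: %v\n", delays[attempt], lastErr)
		time.Sleep(delays[attempt])
	}
	fmt.Printf("giving up: %v\n", lastErr) // minikube surfaces this as GUEST_PAUSE
}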

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-380588 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 16:17:42.566091  620323 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:17:42.566280  620323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:42.566295  620323 out.go:374] Setting ErrFile to fd 2...
	I1202 16:17:42.566301  620323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:42.566629  620323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:17:42.567007  620323 out.go:368] Setting JSON to false
	I1202 16:17:42.567033  620323 mustload.go:66] Loading cluster: old-k8s-version-380588
	I1202 16:17:42.567636  620323 config.go:182] Loaded profile config "old-k8s-version-380588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 16:17:42.568284  620323 cli_runner.go:164] Run: docker container inspect old-k8s-version-380588 --format={{.State.Status}}
	I1202 16:17:42.593898  620323 host.go:66] Checking if "old-k8s-version-380588" exists ...
	I1202 16:17:42.594351  620323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:42.669316  620323 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-02 16:17:42.655782008 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:42.670296  620323 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-380588 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 16:17:42.673079  620323 out.go:179] * Pausing node old-k8s-version-380588 ... 
	I1202 16:17:42.674280  620323 host.go:66] Checking if "old-k8s-version-380588" exists ...
	I1202 16:17:42.674703  620323 ssh_runner.go:195] Run: systemctl --version
	I1202 16:17:42.674768  620323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-380588
	I1202 16:17:42.698763  620323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33240 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/old-k8s-version-380588/id_rsa Username:docker}
	I1202 16:17:42.809869  620323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:42.844874  620323 pause.go:52] kubelet running: true
	I1202 16:17:42.844989  620323 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:17:43.081996  620323 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:17:43.082098  620323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:17:43.172138  620323 cri.go:89] found id: "c325d39d2a5d69fe2b31e92e4f9788a06cbe591e8f5b9a834b9dab65b20c1ac8"
	I1202 16:17:43.172176  620323 cri.go:89] found id: "e4ec4ba515fabd2712fb6c47a42ae38d829c32a9f5d6d7e8f7b2fff79861fe50"
	I1202 16:17:43.172183  620323 cri.go:89] found id: "f5fa23473c23570bed3b8cae515e1d47152a8bbcc1d833bbb220c14786e91242"
	I1202 16:17:43.172188  620323 cri.go:89] found id: "19a923b6f740b6e9edc34def00b3c0200695a3c12243306e18b73e7cba12f465"
	I1202 16:17:43.172192  620323 cri.go:89] found id: "a8f293ec5a85a4629b5301ed6f052814c79479439f97486c750e2d8f5e2ec1f5"
	I1202 16:17:43.172200  620323 cri.go:89] found id: "7f110a0363a9a5cf52f114e4eeb59c098716f360f4be3437bb75f0e0ddf16391"
	I1202 16:17:43.172206  620323 cri.go:89] found id: "6dfca71f4fbfde27fe7499c7118ecfb2f1add3481dc2e404f53badeec3d76a83"
	I1202 16:17:43.172210  620323 cri.go:89] found id: "4d3bf69c2ebc82ed7ac27121eb8894a9b4b6447e5a562f1e350b6d588d0ad01e"
	I1202 16:17:43.172214  620323 cri.go:89] found id: "6375989d507379cc812257f2c9f777cb49645b84e1445f665882a8f604b996ac"
	I1202 16:17:43.172238  620323 cri.go:89] found id: "5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346"
	I1202 16:17:43.172250  620323 cri.go:89] found id: "d3b98e3ce5ef5315549e5e82069de566f733711b1c003f5dcf7e0fd0f2108a47"
	I1202 16:17:43.172255  620323 cri.go:89] found id: ""
	I1202 16:17:43.172305  620323 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:17:43.188963  620323 retry.go:31] will retry after 172.370096ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:43Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:17:43.362326  620323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:43.377056  620323 pause.go:52] kubelet running: false
	I1202 16:17:43.377126  620323 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:17:43.540315  620323 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:17:43.540394  620323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:17:43.611499  620323 cri.go:89] found id: "c325d39d2a5d69fe2b31e92e4f9788a06cbe591e8f5b9a834b9dab65b20c1ac8"
	I1202 16:17:43.611522  620323 cri.go:89] found id: "e4ec4ba515fabd2712fb6c47a42ae38d829c32a9f5d6d7e8f7b2fff79861fe50"
	I1202 16:17:43.611527  620323 cri.go:89] found id: "f5fa23473c23570bed3b8cae515e1d47152a8bbcc1d833bbb220c14786e91242"
	I1202 16:17:43.611530  620323 cri.go:89] found id: "19a923b6f740b6e9edc34def00b3c0200695a3c12243306e18b73e7cba12f465"
	I1202 16:17:43.611534  620323 cri.go:89] found id: "a8f293ec5a85a4629b5301ed6f052814c79479439f97486c750e2d8f5e2ec1f5"
	I1202 16:17:43.611537  620323 cri.go:89] found id: "7f110a0363a9a5cf52f114e4eeb59c098716f360f4be3437bb75f0e0ddf16391"
	I1202 16:17:43.611540  620323 cri.go:89] found id: "6dfca71f4fbfde27fe7499c7118ecfb2f1add3481dc2e404f53badeec3d76a83"
	I1202 16:17:43.611543  620323 cri.go:89] found id: "4d3bf69c2ebc82ed7ac27121eb8894a9b4b6447e5a562f1e350b6d588d0ad01e"
	I1202 16:17:43.611546  620323 cri.go:89] found id: "6375989d507379cc812257f2c9f777cb49645b84e1445f665882a8f604b996ac"
	I1202 16:17:43.611562  620323 cri.go:89] found id: "5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346"
	I1202 16:17:43.611566  620323 cri.go:89] found id: "d3b98e3ce5ef5315549e5e82069de566f733711b1c003f5dcf7e0fd0f2108a47"
	I1202 16:17:43.611568  620323 cri.go:89] found id: ""
	I1202 16:17:43.611608  620323 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:17:43.623772  620323 retry.go:31] will retry after 225.018346ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:43Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:17:43.848996  620323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:43.862206  620323 pause.go:52] kubelet running: false
	I1202 16:17:43.862271  620323 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:17:44.016385  620323 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:17:44.016513  620323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:17:44.086270  620323 cri.go:89] found id: "c325d39d2a5d69fe2b31e92e4f9788a06cbe591e8f5b9a834b9dab65b20c1ac8"
	I1202 16:17:44.086296  620323 cri.go:89] found id: "e4ec4ba515fabd2712fb6c47a42ae38d829c32a9f5d6d7e8f7b2fff79861fe50"
	I1202 16:17:44.086300  620323 cri.go:89] found id: "f5fa23473c23570bed3b8cae515e1d47152a8bbcc1d833bbb220c14786e91242"
	I1202 16:17:44.086304  620323 cri.go:89] found id: "19a923b6f740b6e9edc34def00b3c0200695a3c12243306e18b73e7cba12f465"
	I1202 16:17:44.086307  620323 cri.go:89] found id: "a8f293ec5a85a4629b5301ed6f052814c79479439f97486c750e2d8f5e2ec1f5"
	I1202 16:17:44.086316  620323 cri.go:89] found id: "7f110a0363a9a5cf52f114e4eeb59c098716f360f4be3437bb75f0e0ddf16391"
	I1202 16:17:44.086319  620323 cri.go:89] found id: "6dfca71f4fbfde27fe7499c7118ecfb2f1add3481dc2e404f53badeec3d76a83"
	I1202 16:17:44.086322  620323 cri.go:89] found id: "4d3bf69c2ebc82ed7ac27121eb8894a9b4b6447e5a562f1e350b6d588d0ad01e"
	I1202 16:17:44.086325  620323 cri.go:89] found id: "6375989d507379cc812257f2c9f777cb49645b84e1445f665882a8f604b996ac"
	I1202 16:17:44.086331  620323 cri.go:89] found id: "5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346"
	I1202 16:17:44.086333  620323 cri.go:89] found id: "d3b98e3ce5ef5315549e5e82069de566f733711b1c003f5dcf7e0fd0f2108a47"
	I1202 16:17:44.086336  620323 cri.go:89] found id: ""
	I1202 16:17:44.086373  620323 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:17:44.099594  620323 retry.go:31] will retry after 425.218011ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:44Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:17:44.525163  620323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:44.539391  620323 pause.go:52] kubelet running: false
	I1202 16:17:44.539502  620323 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:17:44.698897  620323 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:17:44.698978  620323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:17:44.778859  620323 cri.go:89] found id: "c325d39d2a5d69fe2b31e92e4f9788a06cbe591e8f5b9a834b9dab65b20c1ac8"
	I1202 16:17:44.778889  620323 cri.go:89] found id: "e4ec4ba515fabd2712fb6c47a42ae38d829c32a9f5d6d7e8f7b2fff79861fe50"
	I1202 16:17:44.778895  620323 cri.go:89] found id: "f5fa23473c23570bed3b8cae515e1d47152a8bbcc1d833bbb220c14786e91242"
	I1202 16:17:44.778901  620323 cri.go:89] found id: "19a923b6f740b6e9edc34def00b3c0200695a3c12243306e18b73e7cba12f465"
	I1202 16:17:44.778917  620323 cri.go:89] found id: "a8f293ec5a85a4629b5301ed6f052814c79479439f97486c750e2d8f5e2ec1f5"
	I1202 16:17:44.778929  620323 cri.go:89] found id: "7f110a0363a9a5cf52f114e4eeb59c098716f360f4be3437bb75f0e0ddf16391"
	I1202 16:17:44.778934  620323 cri.go:89] found id: "6dfca71f4fbfde27fe7499c7118ecfb2f1add3481dc2e404f53badeec3d76a83"
	I1202 16:17:44.778938  620323 cri.go:89] found id: "4d3bf69c2ebc82ed7ac27121eb8894a9b4b6447e5a562f1e350b6d588d0ad01e"
	I1202 16:17:44.778942  620323 cri.go:89] found id: "6375989d507379cc812257f2c9f777cb49645b84e1445f665882a8f604b996ac"
	I1202 16:17:44.778951  620323 cri.go:89] found id: "5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346"
	I1202 16:17:44.778956  620323 cri.go:89] found id: "d3b98e3ce5ef5315549e5e82069de566f733711b1c003f5dcf7e0fd0f2108a47"
	I1202 16:17:44.778960  620323 cri.go:89] found id: ""
	I1202 16:17:44.779006  620323 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:17:44.795850  620323 out.go:203] 
	W1202 16:17:44.797025  620323 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 16:17:44.797043  620323 out.go:285] * 
	W1202 16:17:44.802698  620323 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 16:17:44.805324  620323 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-380588 --alsologtostderr -v=1 failed: exit status 80
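The pause failure above reduces to `sudo runc list -f json` exiting 1 because /run/runc does not exist inside the node container, so minikube cannot enumerate running containers before pausing. A rough manual reproduction of the same checks against this profile (the commands mirror the ones already shown in the log; the profile name is the one used by this test) would be:

	# list running CRI containers in one of the namespaces minikube pauses
	minikube -p old-k8s-version-380588 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the step that fails in the log: runc cannot open its state directory
	minikube -p old-k8s-version-380588 ssh -- sudo runc list -f json
	# confirm whether the state directory is present on the node
	minikube -p old-k8s-version-380588 ssh -- ls -d /run/runc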
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-380588
helpers_test.go:243: (dbg) docker inspect old-k8s-version-380588:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5",
	        "Created": "2025-12-02T16:15:24.388732142Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 607867,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:16:40.314561184Z",
	            "FinishedAt": "2025-12-02T16:16:39.283796401Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5/hostname",
	        "HostsPath": "/var/lib/docker/containers/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5/hosts",
	        "LogPath": "/var/lib/docker/containers/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5-json.log",
	        "Name": "/old-k8s-version-380588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-380588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-380588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5",
	                "LowerDir": "/var/lib/docker/overlay2/cc7db27ba93f361cedfb46f5902b70f222396dd2f79762e474c32c7912e9c9f1-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc7db27ba93f361cedfb46f5902b70f222396dd2f79762e474c32c7912e9c9f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc7db27ba93f361cedfb46f5902b70f222396dd2f79762e474c32c7912e9c9f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc7db27ba93f361cedfb46f5902b70f222396dd2f79762e474c32c7912e9c9f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-380588",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-380588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-380588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-380588",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-380588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d3016d30e636ad5b26f68c8ba3434fae66fe6e447a05bf044d9eb87bd62d352a",
	            "SandboxKey": "/var/run/docker/netns/d3016d30e636",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33240"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33241"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33244"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33242"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33243"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-380588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12755aa6121ef84808d7e2051c86e67e4ac4ab231ddc7e94bd39dd8ca085a952",
	                    "EndpointID": "a232ca67e3089767be78ddc2fc5580ea520fc4739f992ce93a45eb049e021f59",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "fe:95:a6:8a:67:c1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-380588",
	                        "a0a1616e8b44"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-380588 -n old-k8s-version-380588
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-380588 -n old-k8s-version-380588: exit status 2 (365.726155ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-380588 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-380588 logs -n 25: (1.294963132s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-589300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo crio config                                                                                                                                                                                                             │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p bridge-589300                                                                                                                                                                                                                              │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p disable-driver-mounts-904481                                                                                                                                                                                                               │ disable-driver-mounts-904481 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-380588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p old-k8s-version-380588 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-534842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p no-preload-534842 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-380588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p no-preload-534842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-046271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p embed-certs-046271 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-806420 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-046271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-806420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ old-k8s-version-380588 image list --format=json                                                                                                                                                                                               │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p old-k8s-version-380588 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ no-preload-534842 image list --format=json                                                                                                                                                                                                    │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p no-preload-534842 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:17:22
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:17:22.498316  617021 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:17:22.498682  617021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:22.498698  617021 out.go:374] Setting ErrFile to fd 2...
	I1202 16:17:22.498706  617021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:22.499020  617021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:17:22.499708  617021 out.go:368] Setting JSON to false
	I1202 16:17:22.501327  617021 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10783,"bootTime":1764681459,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:17:22.501399  617021 start.go:143] virtualization: kvm guest
	I1202 16:17:22.505282  617021 out.go:179] * [default-k8s-diff-port-806420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:17:22.506595  617021 notify.go:221] Checking for updates...
	I1202 16:17:22.506646  617021 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:17:22.507981  617021 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:17:22.509145  617021 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:22.510227  617021 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:17:22.511263  617021 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:17:22.512202  617021 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:17:22.513803  617021 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:22.514580  617021 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:17:22.546450  617021 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:17:22.546572  617021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:22.614629  617021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:17:22.602669456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:22.614775  617021 docker.go:319] overlay module found
	I1202 16:17:22.616372  617021 out.go:179] * Using the docker driver based on existing profile
	I1202 16:17:20.554206  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:17:20.554226  615191 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:17:20.554286  615191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-046271
	I1202 16:17:20.578798  615191 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:20.578835  615191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:17:20.578900  615191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-046271
	I1202 16:17:20.590547  615191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:17:20.597866  615191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:17:20.608006  615191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:17:20.696829  615191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:17:20.711938  615191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:17:20.715717  615191 node_ready.go:35] waiting up to 6m0s for node "embed-certs-046271" to be "Ready" ...
	I1202 16:17:20.724206  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:17:20.724236  615191 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:17:20.733876  615191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:20.741340  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:17:20.741367  615191 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:17:20.760344  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:17:20.760372  615191 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:17:20.777477  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:17:20.777507  615191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:17:20.794322  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:17:20.794352  615191 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:17:20.812771  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:17:20.812806  615191 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:17:20.827575  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:17:20.827606  615191 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:17:20.843608  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:17:20.843637  615191 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:17:20.858834  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:17:20.858862  615191 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:17:20.877363  615191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:17:22.050597  615191 node_ready.go:49] node "embed-certs-046271" is "Ready"
	I1202 16:17:22.050643  615191 node_ready.go:38] duration metric: took 1.334887125s for node "embed-certs-046271" to be "Ready" ...
	I1202 16:17:22.050670  615191 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:17:22.050729  615191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:17:22.687464  615191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.975454995s)
	I1202 16:17:22.687522  615191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.953605693s)
	I1202 16:17:22.687655  615191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.810242956s)
	I1202 16:17:22.687712  615191 api_server.go:72] duration metric: took 2.165624029s to wait for apiserver process to appear ...
	I1202 16:17:22.617494  617021 start.go:309] selected driver: docker
	I1202 16:17:22.617510  617021 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:22.617607  617021 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:17:22.618289  617021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:22.687951  617021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:17:22.676818567 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:22.688331  617021 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:17:22.688378  617021 cni.go:84] Creating CNI manager for ""
	I1202 16:17:22.688459  617021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:17:22.688539  617021 start.go:353] cluster config:
	{Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:22.687737  615191 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:17:22.687841  615191 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 16:17:22.689323  615191 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-046271 addons enable metrics-server
	
	I1202 16:17:22.690518  617021 out.go:179] * Starting "default-k8s-diff-port-806420" primary control-plane node in "default-k8s-diff-port-806420" cluster
	I1202 16:17:22.691442  617021 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:17:22.692381  617021 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:17:22.696323  615191 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:22.696349  615191 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 16:17:22.701692  615191 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 16:17:22.693673  617021 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:17:22.693741  617021 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:17:22.693782  617021 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 16:17:22.693799  617021 cache.go:65] Caching tarball of preloaded images
	I1202 16:17:22.693901  617021 preload.go:238] Found /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 16:17:22.693915  617021 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 16:17:22.694040  617021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json ...
	I1202 16:17:22.717168  617021 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:17:22.717185  617021 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 16:17:22.717204  617021 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:17:22.717240  617021 start.go:360] acquireMachinesLock for default-k8s-diff-port-806420: {Name:mk8a961b68c6bbf9b1910f8ae43c90e49f86c0f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:22.717306  617021 start.go:364] duration metric: took 43.2µs to acquireMachinesLock for "default-k8s-diff-port-806420"
	I1202 16:17:22.717329  617021 start.go:96] Skipping create...Using existing machine configuration
	I1202 16:17:22.717337  617021 fix.go:54] fixHost starting: 
	I1202 16:17:22.717575  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:22.736168  617021 fix.go:112] recreateIfNeeded on default-k8s-diff-port-806420: state=Stopped err=<nil>
	W1202 16:17:22.736197  617021 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 16:17:22.702818  615191 addons.go:530] duration metric: took 2.180728191s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 16:17:23.187965  615191 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 16:17:23.202226  615191 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:23.202260  615191 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
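Both 500 responses above fail only on the [-]poststarthook/rbac/bootstrap-roles check, which normally clears within a few seconds once the apiserver has reconciled its bootstrap RBAC roles; minikube simply keeps polling /healthz until it returns 200 (as it does at 16:17:23.693 below). The following is a minimal, illustrative Go sketch of such a poll; the endpoint is taken from the log, and InsecureSkipVerify is a shortcut used only to keep the sketch short, not what minikube itself does.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	url := "https://192.168.76.2:8443/healthz"
	// Illustration-only shortcut: skip TLS verification instead of trusting the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}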
	W1202 16:17:19.307997  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:21.806201  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:20.509898  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	W1202 16:17:22.511187  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	W1202 16:17:25.009769  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	I1202 16:17:22.738049  617021 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-806420" ...
	I1202 16:17:22.738131  617021 cli_runner.go:164] Run: docker start default-k8s-diff-port-806420
	I1202 16:17:23.056389  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:23.080845  617021 kic.go:430] container "default-k8s-diff-port-806420" state is running.
	I1202 16:17:23.081352  617021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:17:23.104364  617021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json ...
	I1202 16:17:23.104731  617021 machine.go:94] provisionDockerMachine start ...
	I1202 16:17:23.104810  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:23.132129  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:23.132593  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:23.132615  617021 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:17:23.133560  617021 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44490->127.0.0.1:33255: read: connection reset by peer
	I1202 16:17:26.278234  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-806420
	
	I1202 16:17:26.278279  617021 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-806420"
	I1202 16:17:26.278370  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.298722  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:26.298946  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:26.298961  617021 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-806420 && echo "default-k8s-diff-port-806420" | sudo tee /etc/hostname
	I1202 16:17:26.455925  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-806420
	
	I1202 16:17:26.456010  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.475742  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:26.476020  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:26.476041  617021 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-806420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-806420/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-806420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:17:26.621706  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
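The repeated docker container inspect -f "{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}" calls above resolve the host port Docker mapped to the container's SSH port (33255 in this run), which the native SSH client then dials on 127.0.0.1. A standalone sketch of the same lookup, assuming only that the container from this run is still present:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Container name taken from this run's log.
	name := "default-k8s-diff-port-806420"
	// Same Go template the log passes to `docker container inspect -f`.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("SSH reachable on 127.0.0.1:" + strings.TrimSpace(string(out)))
}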
	I1202 16:17:26.621744  617021 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:17:26.621776  617021 ubuntu.go:190] setting up certificates
	I1202 16:17:26.621791  617021 provision.go:84] configureAuth start
	I1202 16:17:26.621871  617021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:17:26.646855  617021 provision.go:143] copyHostCerts
	I1202 16:17:26.646932  617021 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:17:26.646949  617021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:17:26.647023  617021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:17:26.647146  617021 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:17:26.647160  617021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:17:26.647202  617021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:17:26.647293  617021 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:17:26.647305  617021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:17:26.647345  617021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:17:26.647443  617021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-806420 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-806420 localhost minikube]
	I1202 16:17:26.754337  617021 provision.go:177] copyRemoteCerts
	I1202 16:17:26.754415  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:17:26.754477  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.777385  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:26.893005  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1202 16:17:26.918128  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:17:26.944489  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:17:26.970311  617021 provision.go:87] duration metric: took 348.497825ms to configureAuth
	I1202 16:17:26.970349  617021 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:17:26.970597  617021 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:26.970740  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.995213  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:26.995551  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:26.995581  617021 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:17:23.688681  615191 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 16:17:23.693093  615191 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1202 16:17:23.694079  615191 api_server.go:141] control plane version: v1.34.2
	I1202 16:17:23.694104  615191 api_server.go:131] duration metric: took 1.006283162s to wait for apiserver health ...
	I1202 16:17:23.694113  615191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:17:23.697817  615191 system_pods.go:59] 8 kube-system pods found
	I1202 16:17:23.697855  615191 system_pods.go:61] "coredns-66bc5c9577-f2vhx" [364e193c-f53a-4a43-b365-fe8364c3bd0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:23.697865  615191 system_pods.go:61] "etcd-embed-certs-046271" [5b715b6b-8154-4ca8-9dc1-795be52cb8b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:23.697876  615191 system_pods.go:61] "kindnet-wpj6k" [9249e8d2-e10c-4cae-bf04-cbf331109cf5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:23.697883  615191 system_pods.go:61] "kube-apiserver-embed-certs-046271" [f87f3619-f513-463f-bb69-acf168ec4ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:23.697892  615191 system_pods.go:61] "kube-controller-manager-embed-certs-046271" [bbdde76a-6098-496b-aaeb-2d61a714017a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:23.697899  615191 system_pods.go:61] "kube-proxy-q9pxb" [85574988-c836-4351-80bf-92683e782d91] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:23.697905  615191 system_pods.go:61] "kube-scheduler-embed-certs-046271" [d3b40c19-3363-443d-93f9-d2789b47d291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:23.697910  615191 system_pods.go:61] "storage-provisioner" [5a625bd8-b8b8-4abc-b86a-d39218c7ffe3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:23.697918  615191 system_pods.go:74] duration metric: took 3.801084ms to wait for pod list to return data ...
	I1202 16:17:23.697926  615191 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:17:23.700382  615191 default_sa.go:45] found service account: "default"
	I1202 16:17:23.700399  615191 default_sa.go:55] duration metric: took 2.466186ms for default service account to be created ...
	I1202 16:17:23.700407  615191 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:17:23.703139  615191 system_pods.go:86] 8 kube-system pods found
	I1202 16:17:23.703167  615191 system_pods.go:89] "coredns-66bc5c9577-f2vhx" [364e193c-f53a-4a43-b365-fe8364c3bd0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:23.703178  615191 system_pods.go:89] "etcd-embed-certs-046271" [5b715b6b-8154-4ca8-9dc1-795be52cb8b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:23.703189  615191 system_pods.go:89] "kindnet-wpj6k" [9249e8d2-e10c-4cae-bf04-cbf331109cf5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:23.703199  615191 system_pods.go:89] "kube-apiserver-embed-certs-046271" [f87f3619-f513-463f-bb69-acf168ec4ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:23.703214  615191 system_pods.go:89] "kube-controller-manager-embed-certs-046271" [bbdde76a-6098-496b-aaeb-2d61a714017a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:23.703227  615191 system_pods.go:89] "kube-proxy-q9pxb" [85574988-c836-4351-80bf-92683e782d91] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:23.703256  615191 system_pods.go:89] "kube-scheduler-embed-certs-046271" [d3b40c19-3363-443d-93f9-d2789b47d291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:23.703268  615191 system_pods.go:89] "storage-provisioner" [5a625bd8-b8b8-4abc-b86a-d39218c7ffe3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:23.703278  615191 system_pods.go:126] duration metric: took 2.864031ms to wait for k8s-apps to be running ...
	I1202 16:17:23.703288  615191 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:17:23.703342  615191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:23.717127  615191 system_svc.go:56] duration metric: took 13.83377ms WaitForService to wait for kubelet
	I1202 16:17:23.717156  615191 kubeadm.go:587] duration metric: took 3.195075641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:17:23.717179  615191 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:17:23.720108  615191 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:17:23.720130  615191 node_conditions.go:123] node cpu capacity is 8
	I1202 16:17:23.720143  615191 node_conditions.go:105] duration metric: took 2.959591ms to run NodePressure ...
	I1202 16:17:23.720159  615191 start.go:242] waiting for startup goroutines ...
	I1202 16:17:23.720169  615191 start.go:247] waiting for cluster config update ...
	I1202 16:17:23.720186  615191 start.go:256] writing updated cluster config ...
	I1202 16:17:23.720469  615191 ssh_runner.go:195] Run: rm -f paused
	I1202 16:17:23.724393  615191 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:23.728063  615191 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f2vhx" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 16:17:25.734503  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:27.735550  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:23.807143  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:26.306569  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:28.307617  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	I1202 16:17:27.600994  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:17:27.601027  617021 machine.go:97] duration metric: took 4.496275002s to provisionDockerMachine
	I1202 16:17:27.601043  617021 start.go:293] postStartSetup for "default-k8s-diff-port-806420" (driver="docker")
	I1202 16:17:27.601058  617021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:17:27.601128  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:17:27.601178  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.623246  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:27.730663  617021 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:17:27.735877  617021 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:17:27.735907  617021 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:17:27.735918  617021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:17:27.735966  617021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:17:27.736035  617021 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:17:27.736120  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:17:27.745825  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:17:27.768713  617021 start.go:296] duration metric: took 167.65018ms for postStartSetup
	I1202 16:17:27.768803  617021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:17:27.768855  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.789992  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:27.900148  617021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:17:27.906371  617021 fix.go:56] duration metric: took 5.18902239s for fixHost
	I1202 16:17:27.906403  617021 start.go:83] releasing machines lock for "default-k8s-diff-port-806420", held for 5.189082645s
	I1202 16:17:27.906507  617021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:17:27.929346  617021 ssh_runner.go:195] Run: cat /version.json
	I1202 16:17:27.929406  617021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:17:27.929409  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.929492  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.952635  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:27.954515  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:28.138245  617021 ssh_runner.go:195] Run: systemctl --version
	I1202 16:17:28.147344  617021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:17:28.198225  617021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:17:28.204870  617021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:17:28.204948  617021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:17:28.216111  617021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 16:17:28.216139  617021 start.go:496] detecting cgroup driver to use...
	I1202 16:17:28.216177  617021 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:17:28.216233  617021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:17:28.236312  617021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:17:28.253597  617021 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:17:28.253663  617021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:17:28.274789  617021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:17:28.292789  617021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:17:28.400578  617021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:17:28.502622  617021 docker.go:234] disabling docker service ...
	I1202 16:17:28.502709  617021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:17:28.519863  617021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:17:28.534627  617021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:17:28.622884  617021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:17:28.715766  617021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:17:28.728514  617021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:17:28.743515  617021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:17:28.743589  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.752513  617021 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:17:28.752573  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.761803  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.770820  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.779678  617021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:17:28.788772  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.799817  617021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.812207  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.822959  617021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:17:28.830615  617021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:17:28.839315  617021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:28.935291  617021 ssh_runner.go:195] Run: sudo systemctl restart crio
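The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place so that the pause image is registry.k8s.io/pause:3.10.1, cgroup_manager is "systemd", conmon_cgroup is "pod", and "net.ipv4.ip_unprivileged_port_start=0" is added to default_sysctls, after which crio is restarted. A small hypothetical check, run on the node itself, that the drop-in ended up with those values:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path and expected values come from the sed commands in the log above.
	data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	conf := string(data)
	for _, want := range []string{
		`pause_image = "registry.k8s.io/pause:3.10.1"`,
		`cgroup_manager = "systemd"`,
		`conmon_cgroup = "pod"`,
		`"net.ipv4.ip_unprivileged_port_start=0"`,
	} {
		fmt.Printf("%-55s present=%v\n", want, strings.Contains(conf, want))
	}
}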
	I1202 16:17:29.312918  617021 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:17:29.312980  617021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:17:29.316948  617021 start.go:564] Will wait 60s for crictl version
	I1202 16:17:29.316995  617021 ssh_runner.go:195] Run: which crictl
	I1202 16:17:29.320879  617021 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:17:29.346184  617021 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 16:17:29.346247  617021 ssh_runner.go:195] Run: crio --version
	I1202 16:17:29.374009  617021 ssh_runner.go:195] Run: crio --version
	I1202 16:17:29.405802  617021 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	W1202 16:17:27.010483  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	I1202 16:17:29.009809  607516 pod_ready.go:94] pod "coredns-5dd5756b68-fsfh2" is "Ready"
	I1202 16:17:29.009836  607516 pod_ready.go:86] duration metric: took 38.00631225s for pod "coredns-5dd5756b68-fsfh2" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.012870  607516 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.017277  607516 pod_ready.go:94] pod "etcd-old-k8s-version-380588" is "Ready"
	I1202 16:17:29.017298  607516 pod_ready.go:86] duration metric: took 4.40606ms for pod "etcd-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.019970  607516 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.023996  607516 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-380588" is "Ready"
	I1202 16:17:29.024017  607516 pod_ready.go:86] duration metric: took 4.027937ms for pod "kube-apiserver-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.026488  607516 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.207471  607516 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-380588" is "Ready"
	I1202 16:17:29.207497  607516 pod_ready.go:86] duration metric: took 180.991786ms for pod "kube-controller-manager-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.408298  607516 pod_ready.go:83] waiting for pod "kube-proxy-jqstm" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.809129  607516 pod_ready.go:94] pod "kube-proxy-jqstm" is "Ready"
	I1202 16:17:29.809162  607516 pod_ready.go:86] duration metric: took 400.836367ms for pod "kube-proxy-jqstm" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:30.009989  607516 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:30.408957  607516 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-380588" is "Ready"
	I1202 16:17:30.409044  607516 pod_ready.go:86] duration metric: took 399.025835ms for pod "kube-scheduler-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:30.409070  607516 pod_ready.go:40] duration metric: took 39.411732547s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:30.482562  607516 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1202 16:17:30.484303  607516 out.go:203] 
	W1202 16:17:30.485747  607516 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1202 16:17:30.486932  607516 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1202 16:17:30.488134  607516 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-380588" cluster and "default" namespace by default
	I1202 16:17:29.407098  617021 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-806420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:17:29.424770  617021 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 16:17:29.429550  617021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
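The bash one-liner above is how minikube pins host.minikube.internal in the node's /etc/hosts: it drops any existing entry for that name and appends a fresh "192.168.85.1<TAB>host.minikube.internal" line. The same idea in Go, as an illustrative sketch that assumes it runs directly on the node rather than over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// IP and hostname taken from the log above.
	const entry = "192.168.85.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale host.minikube.internal entry, mirroring the grep -v above.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}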
	I1202 16:17:29.439999  617021 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:17:29.440104  617021 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:17:29.440140  617021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:17:29.471019  617021 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:17:29.471045  617021 crio.go:433] Images already preloaded, skipping extraction
	I1202 16:17:29.471102  617021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:17:29.496542  617021 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:17:29.496569  617021 cache_images.go:86] Images are preloaded, skipping loading
	I1202 16:17:29.496578  617021 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1202 16:17:29.496701  617021 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-806420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:17:29.496786  617021 ssh_runner.go:195] Run: crio config
	I1202 16:17:29.541566  617021 cni.go:84] Creating CNI manager for ""
	I1202 16:17:29.541586  617021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:17:29.541596  617021 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 16:17:29.541616  617021 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-806420 NodeName:default-k8s-diff-port-806420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:17:29.541728  617021 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-806420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 16:17:29.541789  617021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 16:17:29.550029  617021 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 16:17:29.550090  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:17:29.558054  617021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1202 16:17:29.571441  617021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 16:17:29.584227  617021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1202 16:17:29.597282  617021 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:17:29.601067  617021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:17:29.611632  617021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:29.694704  617021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:17:29.718170  617021 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420 for IP: 192.168.85.2
	I1202 16:17:29.718196  617021 certs.go:195] generating shared ca certs ...
	I1202 16:17:29.718216  617021 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:29.718396  617021 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:17:29.718471  617021 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:17:29.718486  617021 certs.go:257] generating profile certs ...
	I1202 16:17:29.718602  617021 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/client.key
	I1202 16:17:29.718693  617021 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key.20cb4091
	I1202 16:17:29.718752  617021 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.key
	I1202 16:17:29.718896  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:17:29.718940  617021 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:17:29.718953  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:17:29.718990  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:17:29.719023  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:17:29.719054  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:17:29.719109  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:17:29.719924  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:17:29.741007  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:17:29.761350  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:17:29.780876  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:17:29.804308  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 16:17:29.825901  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 16:17:29.848908  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:17:29.867865  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 16:17:29.888652  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:17:29.910779  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:17:29.932582  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:17:29.956561  617021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:17:29.972696  617021 ssh_runner.go:195] Run: openssl version
	I1202 16:17:29.980524  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:17:29.991411  617021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:17:29.996151  617021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:17:29.996212  617021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:17:30.050503  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:17:30.061483  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:17:30.072491  617021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:17:30.077665  617021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:17:30.077718  617021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:17:30.129682  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:17:30.140657  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:17:30.152273  617021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:17:30.157239  617021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:17:30.157304  617021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:17:30.211554  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:17:30.223094  617021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:17:30.228304  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:17:30.285622  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:17:30.343619  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:17:30.405618  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:17:30.470279  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:17:30.533815  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
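Each openssl x509 -noout -in <cert> -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now, which minikube uses here to confirm the existing control-plane certs are still usable. A rough Go equivalent of that check, with paths copied from the log and assumed to be run on the node:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Certificate paths copied from the log above.
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	cutoff := time.Now().Add(24 * time.Hour) // same window as `-checkend 86400`
	for _, path := range certs {
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(path, "read failed:", err)
			continue
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println(path, "is not PEM encoded")
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(path, "parse failed:", err)
			continue
		}
		fmt.Printf("%s expires %s (expiring within 24h: %v)\n",
			path, cert.NotAfter.Format(time.RFC3339), cert.NotAfter.Before(cutoff))
	}
}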
	I1202 16:17:30.599554  617021 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:30.599678  617021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:17:30.599735  617021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:17:30.654880  617021 cri.go:89] found id: "dd7adc25ca0d8fd13c03d582eb1846e44e7ca31363dd13737dfcd8541ae71f4a"
	I1202 16:17:30.654952  617021 cri.go:89] found id: "85a4f9f063a689e0c01b71338ce33ac27c1c4ef5a601031762f5f6f8468c7949"
	I1202 16:17:30.654958  617021 cri.go:89] found id: "fa204ce25b4b750a274bec528d833933338cbebe536dd59bd13e8ef6cec0cb00"
	I1202 16:17:30.654963  617021 cri.go:89] found id: "e986fe28a3e21e60cd56299b5d31eb8159c847908a86b5e9049cff20903959aa"
	I1202 16:17:30.654967  617021 cri.go:89] found id: ""
	I1202 16:17:30.655019  617021 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 16:17:30.673871  617021 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:30Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:17:30.673941  617021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:17:30.686769  617021 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:17:30.686797  617021 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:17:30.686844  617021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:17:30.699192  617021 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:17:30.701520  617021 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-806420" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:30.702957  617021 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-264555/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-806420" cluster setting kubeconfig missing "default-k8s-diff-port-806420" context setting]
	I1202 16:17:30.704478  617021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:30.707218  617021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:17:30.719927  617021 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 16:17:30.720026  617021 kubeadm.go:602] duration metric: took 33.222622ms to restartPrimaryControlPlane
	I1202 16:17:30.720048  617021 kubeadm.go:403] duration metric: took 120.509203ms to StartCluster
	I1202 16:17:30.720091  617021 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:30.720179  617021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:30.723308  617021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:30.723718  617021 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:17:30.724045  617021 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:30.724081  617021 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 16:17:30.724157  617021 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-806420"
	I1202 16:17:30.724174  617021 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-806420"
	W1202 16:17:30.724182  617021 addons.go:248] addon storage-provisioner should already be in state true
	I1202 16:17:30.724203  617021 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:17:30.724727  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.724888  617021 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-806420"
	I1202 16:17:30.724906  617021 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-806420"
	W1202 16:17:30.724915  617021 addons.go:248] addon dashboard should already be in state true
	I1202 16:17:30.724912  617021 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-806420"
	I1202 16:17:30.724939  617021 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-806420"
	I1202 16:17:30.724944  617021 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:17:30.725432  617021 out.go:179] * Verifying Kubernetes components...
	I1202 16:17:30.725507  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.725453  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.730554  617021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:30.764253  617021 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:17:30.765559  617021 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 16:17:30.765563  617021 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:17:30.765773  617021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 16:17:30.765913  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:30.771476  617021 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 16:17:30.772748  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:17:30.772772  617021 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:17:30.772833  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:30.774089  617021 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-806420"
	W1202 16:17:30.774153  617021 addons.go:248] addon default-storageclass should already be in state true
	I1202 16:17:30.774196  617021 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:17:30.774739  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.805290  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:30.815719  617021 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:30.815744  617021 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:17:30.815803  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:30.818534  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:30.847757  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:30.983053  617021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:17:31.006025  617021 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-806420" to be "Ready" ...
	I1202 16:17:31.015709  617021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:17:31.044129  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:17:31.044161  617021 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:17:31.080007  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:17:31.080035  617021 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:17:31.089152  617021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:31.105968  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:17:31.105999  617021 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:17:31.125794  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:17:31.125819  617021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:17:31.146432  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:17:31.146461  617021 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:17:31.166977  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:17:31.167010  617021 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:17:31.185493  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:17:31.185536  617021 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:17:31.204002  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:17:31.204034  617021 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:17:31.223408  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:17:31.223455  617021 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:17:31.243155  617021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 16:17:30.312353  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	I1202 16:17:31.311117  609654 pod_ready.go:94] pod "coredns-7d764666f9-fxl4s" is "Ready"
	I1202 16:17:31.311148  609654 pod_ready.go:86] duration metric: took 32.51010024s for pod "coredns-7d764666f9-fxl4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.314691  609654 pod_ready.go:83] waiting for pod "etcd-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.321620  609654 pod_ready.go:94] pod "etcd-no-preload-534842" is "Ready"
	I1202 16:17:31.321651  609654 pod_ready.go:86] duration metric: took 6.872089ms for pod "etcd-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.324914  609654 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.330629  609654 pod_ready.go:94] pod "kube-apiserver-no-preload-534842" is "Ready"
	I1202 16:17:31.330663  609654 pod_ready.go:86] duration metric: took 5.720105ms for pod "kube-apiserver-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.333806  609654 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.505747  609654 pod_ready.go:94] pod "kube-controller-manager-no-preload-534842" is "Ready"
	I1202 16:17:31.505784  609654 pod_ready.go:86] duration metric: took 171.955168ms for pod "kube-controller-manager-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.705911  609654 pod_ready.go:83] waiting for pod "kube-proxy-xqnrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.105456  609654 pod_ready.go:94] pod "kube-proxy-xqnrx" is "Ready"
	I1202 16:17:32.105487  609654 pod_ready.go:86] duration metric: took 399.544466ms for pod "kube-proxy-xqnrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.306457  609654 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.705260  609654 pod_ready.go:94] pod "kube-scheduler-no-preload-534842" is "Ready"
	I1202 16:17:32.705298  609654 pod_ready.go:86] duration metric: took 398.794846ms for pod "kube-scheduler-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.705317  609654 pod_ready.go:40] duration metric: took 33.908136514s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:32.783728  609654 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 16:17:32.787599  609654 out.go:179] * Done! kubectl is now configured to use "no-preload-534842" cluster and "default" namespace by default
	W1202 16:17:30.238599  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:32.744223  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	I1202 16:17:32.896889  617021 node_ready.go:49] node "default-k8s-diff-port-806420" is "Ready"
	I1202 16:17:32.896991  617021 node_ready.go:38] duration metric: took 1.890924168s for node "default-k8s-diff-port-806420" to be "Ready" ...
	I1202 16:17:32.897022  617021 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:17:32.897106  617021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:17:33.630628  617021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.614880216s)
	I1202 16:17:33.630702  617021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.541520167s)
	I1202 16:17:33.630841  617021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.387645959s)
	I1202 16:17:33.630867  617021 api_server.go:72] duration metric: took 2.907113913s to wait for apiserver process to appear ...
	I1202 16:17:33.630880  617021 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:17:33.630901  617021 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 16:17:33.633116  617021 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-806420 addons enable metrics-server
	
	I1202 16:17:33.635678  617021 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:33.635702  617021 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 16:17:33.639966  617021 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 16:17:33.641004  617021 addons.go:530] duration metric: took 2.916912715s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 16:17:34.131947  617021 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 16:17:34.137470  617021 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 16:17:34.138943  617021 api_server.go:141] control plane version: v1.34.2
	I1202 16:17:34.139019  617021 api_server.go:131] duration metric: took 508.129517ms to wait for apiserver health ...
	I1202 16:17:34.139043  617021 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:17:34.144346  617021 system_pods.go:59] 8 kube-system pods found
	I1202 16:17:34.144412  617021 system_pods.go:61] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:34.144438  617021 system_pods.go:61] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:34.144453  617021 system_pods.go:61] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:34.144461  617021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:34.144472  617021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:34.144482  617021 system_pods.go:61] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:34.144495  617021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:34.144502  617021 system_pods.go:61] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:34.144515  617021 system_pods.go:74] duration metric: took 5.454658ms to wait for pod list to return data ...
	I1202 16:17:34.144526  617021 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:17:34.147568  617021 default_sa.go:45] found service account: "default"
	I1202 16:17:34.147593  617021 default_sa.go:55] duration metric: took 3.053699ms for default service account to be created ...
	I1202 16:17:34.147604  617021 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:17:34.151209  617021 system_pods.go:86] 8 kube-system pods found
	I1202 16:17:34.151246  617021 system_pods.go:89] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:34.151258  617021 system_pods.go:89] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:34.151270  617021 system_pods.go:89] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:34.151280  617021 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:34.151291  617021 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:34.151299  617021 system_pods.go:89] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:34.151307  617021 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:34.151315  617021 system_pods.go:89] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:34.151325  617021 system_pods.go:126] duration metric: took 3.713746ms to wait for k8s-apps to be running ...
	I1202 16:17:34.151335  617021 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:17:34.151394  617021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:34.170938  617021 system_svc.go:56] duration metric: took 19.587588ms WaitForService to wait for kubelet
	I1202 16:17:34.170990  617021 kubeadm.go:587] duration metric: took 3.447228899s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:17:34.171017  617021 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:17:34.176230  617021 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:17:34.176264  617021 node_conditions.go:123] node cpu capacity is 8
	I1202 16:17:34.176284  617021 node_conditions.go:105] duration metric: took 5.260608ms to run NodePressure ...
	I1202 16:17:34.176300  617021 start.go:242] waiting for startup goroutines ...
	I1202 16:17:34.176309  617021 start.go:247] waiting for cluster config update ...
	I1202 16:17:34.176324  617021 start.go:256] writing updated cluster config ...
	I1202 16:17:34.176722  617021 ssh_runner.go:195] Run: rm -f paused
	I1202 16:17:34.181758  617021 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:34.185626  617021 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6h6nr" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 16:17:36.191101  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:35.233349  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:37.234098  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:38.191695  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:40.691815  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:39.234621  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:41.734966  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
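	
	The verbose healthz listing captured a few lines above is produced by kube-apiserver itself: the transient 500 comes from the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks that have not completed yet right after the restart, and the very next probe (16:17:34) already returns 200. As a rough sketch, assuming shell access to the 192.168.85.0/24 network and skipping certificate verification, the same verbose output can usually be fetched directly:
	
	    curl -k "https://192.168.85.2:8444/healthz?verbose"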
	
	
	==> CRI-O <==
	Dec 02 16:17:05 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:05.792509902Z" level=info msg="Starting container: 0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b" id=3cc60342-60e5-4753-b586-9695fb175aaa name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:05 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:05.794557867Z" level=info msg="Started container" PID=1688 containerID=0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw/dashboard-metrics-scraper id=3cc60342-60e5-4753-b586-9695fb175aaa name=/runtime.v1.RuntimeService/StartContainer sandboxID=6992d4bf68678e6a29c4fbd4779bf788d9c80019221c348fc2578d610b220473
	Dec 02 16:17:06 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:06.755286813Z" level=info msg="Removing container: b98c9ddfa802dc6861934733f397432cdc14a2533e759114af75ba66b479bee7" id=49ad2dc2-9a68-4222-9576-cea0184cce7a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:06 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:06.812773366Z" level=info msg="Removed container b98c9ddfa802dc6861934733f397432cdc14a2533e759114af75ba66b479bee7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw/dashboard-metrics-scraper" id=49ad2dc2-9a68-4222-9576-cea0184cce7a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.133477063Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=92627f50-55a1-4b07-91a4-6689e38cba82 name=/runtime.v1.ImageService/PullImage
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.134408537Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a23be95d-dd41-4060-8fe7-d3a9a9522e98 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.136585873Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mwmcm/kubernetes-dashboard" id=ac7297ae-ca7f-40e3-b190-35b0aaeef93f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.136701476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.140998019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.141161872Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d4ea7881034e0780c992068845154e4f2dd9041c0667bd3b4939b634708093e2/merged/etc/group: no such file or directory"
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.141474823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.170090805Z" level=info msg="Created container d3b98e3ce5ef5315549e5e82069de566f733711b1c003f5dcf7e0fd0f2108a47: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mwmcm/kubernetes-dashboard" id=ac7297ae-ca7f-40e3-b190-35b0aaeef93f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.170734073Z" level=info msg="Starting container: d3b98e3ce5ef5315549e5e82069de566f733711b1c003f5dcf7e0fd0f2108a47" id=c8b5d0bb-59e4-4e38-8919-a1e7dac1532c name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.172847834Z" level=info msg="Started container" PID=1738 containerID=d3b98e3ce5ef5315549e5e82069de566f733711b1c003f5dcf7e0fd0f2108a47 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mwmcm/kubernetes-dashboard id=c8b5d0bb-59e4-4e38-8919-a1e7dac1532c name=/runtime.v1.RuntimeService/StartContainer sandboxID=63e9fe12c2a794c8689789c2d2ff7f886c1d44c03408fe9fab0b62775cac1873
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.664485268Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=acfd8df0-b243-4ae1-9570-5ea67fbb52da name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.665473018Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=71541a8a-fa26-47c4-827c-071700a6b39e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.66667516Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw/dashboard-metrics-scraper" id=47a652d3-2452-4d5d-ae2f-f73b13e44d87 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.666815043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.674503646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.675173068Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.703559236Z" level=info msg="Created container 5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw/dashboard-metrics-scraper" id=47a652d3-2452-4d5d-ae2f-f73b13e44d87 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.704191775Z" level=info msg="Starting container: 5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346" id=c602caaa-a8d7-4fdd-a39f-333baad66eb0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.705940863Z" level=info msg="Started container" PID=1759 containerID=5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw/dashboard-metrics-scraper id=c602caaa-a8d7-4fdd-a39f-333baad66eb0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6992d4bf68678e6a29c4fbd4779bf788d9c80019221c348fc2578d610b220473
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.802146876Z" level=info msg="Removing container: 0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b" id=bae02b25-5d07-42f1-8e9f-f9afae2bba90 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.815760022Z" level=info msg="Removed container 0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw/dashboard-metrics-scraper" id=bae02b25-5d07-42f1-8e9f-f9afae2bba90 name=/runtime.v1.RuntimeService/RemoveContainer
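	
	The CRI-O entries above are taken from the container runtime's journal on the old-k8s-version-380588 node. A minimal sketch for pulling the same window of runtime logs, assuming the crio systemd unit name and the profile name shown in this report:
	
	    minikube -p old-k8s-version-380588 ssh -- sudo journalctl -u crio --no-pager --since "2025-12-02 16:17:00"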
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	5fe90db48aae4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   6992d4bf68678       dashboard-metrics-scraper-5f989dc9cf-ftcrw       kubernetes-dashboard
	d3b98e3ce5ef5       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   63e9fe12c2a79       kubernetes-dashboard-8694d4445c-mwmcm            kubernetes-dashboard
	c325d39d2a5d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Running             storage-provisioner         1                   6da9d5a9a1f50       storage-provisioner                              kube-system
	4a067bbe9ef75       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   8e27b39923567       busybox                                          default
	e4ec4ba515fab       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   aa4f0a85a5631       coredns-5dd5756b68-fsfh2                         kube-system
	f5fa23473c235       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   b9c1cc909e5ff       kube-proxy-jqstm                                 kube-system
	19a923b6f740b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   f3b2d1797efc9       kindnet-cd4m6                                    kube-system
	a8f293ec5a85a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   6da9d5a9a1f50       storage-provisioner                              kube-system
	7f110a0363a9a       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           58 seconds ago      Running             kube-apiserver              0                   83400cd0d7133       kube-apiserver-old-k8s-version-380588            kube-system
	6dfca71f4fbfd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           58 seconds ago      Running             etcd                        0                   af60a00965510       etcd-old-k8s-version-380588                      kube-system
	4d3bf69c2ebc8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           58 seconds ago      Running             kube-controller-manager     0                   3550e91549fb0       kube-controller-manager-old-k8s-version-380588   kube-system
	6375989d50737       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           58 seconds ago      Running             kube-scheduler              0                   c033ae3c0f490       kube-scheduler-old-k8s-version-380588            kube-system
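	
	This table is the CRI view of all containers on the node; note that dashboard-metrics-scraper is already on attempt 2 and Exited, matching the Created/Started/Removed churn in the CRI-O log above. A hedged equivalent, assuming crictl is present in the node image:
	
	    minikube -p old-k8s-version-380588 ssh -- sudo crictl ps -a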
	
	
	==> coredns [e4ec4ba515fabd2712fb6c47a42ae38d829c32a9f5d6d7e8f7b2fff79861fe50] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51465 - 51879 "HINFO IN 6395565171171798244.7121814217675539279. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024281714s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
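	
	The CoreDNS log explains why the pod stays unready: the ready plugin keeps reporting "Still waiting on: kubernetes" until the kubernetes plugin has synced with the API server, and the final line records an i/o timeout dialing the service VIP 10.96.0.1:443. To re-check from the same cluster (a sketch, assuming the kubeconfig context matches the profile name):
	
	    kubectl --context old-k8s-version-380588 -n kube-system logs coredns-5dd5756b68-fsfh2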
	
	
	==> describe nodes <==
	Name:               old-k8s-version-380588
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-380588
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=old-k8s-version-380588
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_15_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:15:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-380588
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:17:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:17:20 +0000   Tue, 02 Dec 2025 16:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:17:20 +0000   Tue, 02 Dec 2025 16:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:17:20 +0000   Tue, 02 Dec 2025 16:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:17:20 +0000   Tue, 02 Dec 2025 16:16:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-380588
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                c883883f-eefb-4ccc-83df-e6ee2918146f
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-fsfh2                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-old-k8s-version-380588                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m3s
	  kube-system                 kindnet-cd4m6                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-380588             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-380588    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-jqstm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-380588             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-ftcrw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-mwmcm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s               kubelet          Node old-k8s-version-380588 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s               kubelet          Node old-k8s-version-380588 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s               kubelet          Node old-k8s-version-380588 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node old-k8s-version-380588 event: Registered Node old-k8s-version-380588 in Controller
	  Normal  NodeReady                96s                kubelet          Node old-k8s-version-380588 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node old-k8s-version-380588 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node old-k8s-version-380588 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node old-k8s-version-380588 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-380588 event: Registered Node old-k8s-version-380588 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
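	
	The "martian source" lines are ordinary kernel warnings about packets arriving with a source address the kernel does not expect on that interface; they are common while pod interfaces on the 10.244.0.0/24 range come and go and do not by themselves indicate a failure. A quick way to pull them with readable timestamps (sketch, run on the node):
	
	    sudo dmesg -T | grep -i martian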
	
	
	==> etcd [6dfca71f4fbfde27fe7499c7118ecfb2f1add3481dc2e404f53badeec3d76a83] <==
	{"level":"info","ts":"2025-12-02T16:16:47.218571Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-02T16:16:47.218583Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-02T16:16:47.2189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-02T16:16:47.219049Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-02T16:16:47.21918Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T16:16:47.21922Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T16:16:47.222224Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-02T16:16:47.22261Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-02T16:16:47.222657Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-02T16:16:47.222737Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-02T16:16:47.222757Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-02T16:16:48.210738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-02T16:16:48.210802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-02T16:16:48.210844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-02T16:16:48.210867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-02T16:16:48.210874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-02T16:16:48.210907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-02T16:16:48.210922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-02T16:16:48.212496Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-380588 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-02T16:16:48.212504Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-02T16:16:48.212516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-02T16:16:48.21272Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-02T16:16:48.212744Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-02T16:16:48.213871Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-02T16:16:48.213969Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 16:17:46 up  3:00,  0 user,  load average: 4.51, 4.19, 2.72
	Linux old-k8s-version-380588 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19a923b6f740b6e9edc34def00b3c0200695a3c12243306e18b73e7cba12f465] <==
	I1202 16:16:50.322766       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 16:16:50.323076       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1202 16:16:50.323288       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:16:50.323311       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:16:50.323352       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:16:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:16:50.618403       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:16:50.618475       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:16:50.618489       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:16:50.618805       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 16:16:51.119164       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:16:51.119198       1 metrics.go:72] Registering metrics
	I1202 16:16:51.119294       1 controller.go:711] "Syncing nftables rules"
	I1202 16:17:00.619284       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 16:17:00.619345       1 main.go:301] handling current node
	I1202 16:17:10.619200       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 16:17:10.619254       1 main.go:301] handling current node
	I1202 16:17:20.618610       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 16:17:20.618658       1 main.go:301] handling current node
	I1202 16:17:30.618644       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 16:17:30.618694       1 main.go:301] handling current node
	I1202 16:17:40.618953       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 16:17:40.619020       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7f110a0363a9a5cf52f114e4eeb59c098716f360f4be3437bb75f0e0ddf16391] <==
	I1202 16:16:49.378169       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:16:49.397584       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1202 16:16:49.411782       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1202 16:16:49.411893       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1202 16:16:49.411955       1 shared_informer.go:318] Caches are synced for configmaps
	I1202 16:16:49.412337       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 16:16:49.413059       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1202 16:16:49.413087       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1202 16:16:49.413140       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1202 16:16:49.413305       1 aggregator.go:166] initial CRD sync complete...
	I1202 16:16:49.413386       1 autoregister_controller.go:141] Starting autoregister controller
	I1202 16:16:49.413462       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 16:16:49.413503       1 cache.go:39] Caches are synced for autoregister controller
	E1202 16:16:49.424609       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 16:16:50.284626       1 controller.go:624] quota admission added evaluator for: namespaces
	I1202 16:16:50.317730       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 16:16:50.322328       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1202 16:16:50.355117       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:16:50.366216       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:16:50.378757       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1202 16:16:50.428465       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.78.111"}
	I1202 16:16:50.443764       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.210.90"}
	I1202 16:17:01.623633       1 controller.go:624] quota admission added evaluator for: endpoints
	I1202 16:17:01.697832       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1202 16:17:01.770710       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4d3bf69c2ebc82ed7ac27121eb8894a9b4b6447e5a562f1e350b6d588d0ad01e] <==
	I1202 16:17:01.749355       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1202 16:17:01.749401       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1202 16:17:01.749414       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1202 16:17:01.752168       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.67µs"
	I1202 16:17:01.755748       1 shared_informer.go:318] Caches are synced for daemon sets
	I1202 16:17:01.761103       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1202 16:17:01.770630       1 shared_informer.go:318] Caches are synced for GC
	I1202 16:17:01.789671       1 shared_informer.go:318] Caches are synced for TTL
	I1202 16:17:01.799161       1 shared_informer.go:318] Caches are synced for resource quota
	I1202 16:17:01.806734       1 shared_informer.go:318] Caches are synced for resource quota
	I1202 16:17:01.857036       1 shared_informer.go:318] Caches are synced for persistent volume
	I1202 16:17:01.859402       1 shared_informer.go:318] Caches are synced for PV protection
	I1202 16:17:01.893941       1 shared_informer.go:318] Caches are synced for attach detach
	I1202 16:17:02.216622       1 shared_informer.go:318] Caches are synced for garbage collector
	I1202 16:17:02.219837       1 shared_informer.go:318] Caches are synced for garbage collector
	I1202 16:17:02.219870       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1202 16:17:05.752643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="141.09µs"
	I1202 16:17:06.760131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="132.2µs"
	I1202 16:17:07.765361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.743µs"
	I1202 16:17:09.772480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.92858ms"
	I1202 16:17:09.772731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.16µs"
	I1202 16:17:27.817118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.627µs"
	I1202 16:17:28.805651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.514539ms"
	I1202 16:17:28.805803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.511µs"
	I1202 16:17:32.034791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.347µs"
	
	
	==> kube-proxy [f5fa23473c23570bed3b8cae515e1d47152a8bbcc1d833bbb220c14786e91242] <==
	I1202 16:16:50.091017       1 server_others.go:69] "Using iptables proxy"
	I1202 16:16:50.102855       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1202 16:16:50.121772       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:16:50.124136       1 server_others.go:152] "Using iptables Proxier"
	I1202 16:16:50.124175       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1202 16:16:50.124182       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1202 16:16:50.124207       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1202 16:16:50.124442       1 server.go:846] "Version info" version="v1.28.0"
	I1202 16:16:50.124461       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:16:50.125227       1 config.go:97] "Starting endpoint slice config controller"
	I1202 16:16:50.125258       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1202 16:16:50.125285       1 config.go:188] "Starting service config controller"
	I1202 16:16:50.125288       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1202 16:16:50.126603       1 config.go:315] "Starting node config controller"
	I1202 16:16:50.126636       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1202 16:16:50.225796       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1202 16:16:50.225796       1 shared_informer.go:318] Caches are synced for service config
	I1202 16:16:50.227165       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6375989d507379cc812257f2c9f777cb49645b84e1445f665882a8f604b996ac] <==
	I1202 16:16:47.808108       1 serving.go:348] Generated self-signed cert in-memory
	I1202 16:16:49.398101       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1202 16:16:49.398134       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:16:49.402985       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1202 16:16:49.402999       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 16:16:49.403018       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:16:49.403023       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1202 16:16:49.403033       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1202 16:16:49.403022       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1202 16:16:49.403949       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1202 16:16:49.404019       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1202 16:16:49.503583       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1202 16:16:49.503585       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1202 16:16:49.503593       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Dec 02 16:17:01 old-k8s-version-380588 kubelet[733]: I1202 16:17:01.716997     733 topology_manager.go:215] "Topology Admit Handler" podUID="8ad75430-6092-4b71-92ab-1041a127ac88" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-ftcrw"
	Dec 02 16:17:01 old-k8s-version-380588 kubelet[733]: I1202 16:17:01.718873     733 topology_manager.go:215] "Topology Admit Handler" podUID="4a0441b6-699b-4b02-a86a-76b28b735c51" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-mwmcm"
	Dec 02 16:17:01 old-k8s-version-380588 kubelet[733]: I1202 16:17:01.735808     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8ad75430-6092-4b71-92ab-1041a127ac88-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-ftcrw\" (UID: \"8ad75430-6092-4b71-92ab-1041a127ac88\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw"
	Dec 02 16:17:01 old-k8s-version-380588 kubelet[733]: I1202 16:17:01.735866     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4a0441b6-699b-4b02-a86a-76b28b735c51-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-mwmcm\" (UID: \"4a0441b6-699b-4b02-a86a-76b28b735c51\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mwmcm"
	Dec 02 16:17:01 old-k8s-version-380588 kubelet[733]: I1202 16:17:01.735926     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cmc6\" (UniqueName: \"kubernetes.io/projected/8ad75430-6092-4b71-92ab-1041a127ac88-kube-api-access-6cmc6\") pod \"dashboard-metrics-scraper-5f989dc9cf-ftcrw\" (UID: \"8ad75430-6092-4b71-92ab-1041a127ac88\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw"
	Dec 02 16:17:01 old-k8s-version-380588 kubelet[733]: I1202 16:17:01.735962     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bzbf\" (UniqueName: \"kubernetes.io/projected/4a0441b6-699b-4b02-a86a-76b28b735c51-kube-api-access-8bzbf\") pod \"kubernetes-dashboard-8694d4445c-mwmcm\" (UID: \"4a0441b6-699b-4b02-a86a-76b28b735c51\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mwmcm"
	Dec 02 16:17:05 old-k8s-version-380588 kubelet[733]: I1202 16:17:05.739943     733 scope.go:117] "RemoveContainer" containerID="b98c9ddfa802dc6861934733f397432cdc14a2533e759114af75ba66b479bee7"
	Dec 02 16:17:06 old-k8s-version-380588 kubelet[733]: I1202 16:17:06.743933     733 scope.go:117] "RemoveContainer" containerID="b98c9ddfa802dc6861934733f397432cdc14a2533e759114af75ba66b479bee7"
	Dec 02 16:17:06 old-k8s-version-380588 kubelet[733]: I1202 16:17:06.744253     733 scope.go:117] "RemoveContainer" containerID="0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b"
	Dec 02 16:17:06 old-k8s-version-380588 kubelet[733]: E1202 16:17:06.744639     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ftcrw_kubernetes-dashboard(8ad75430-6092-4b71-92ab-1041a127ac88)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw" podUID="8ad75430-6092-4b71-92ab-1041a127ac88"
	Dec 02 16:17:07 old-k8s-version-380588 kubelet[733]: I1202 16:17:07.748764     733 scope.go:117] "RemoveContainer" containerID="0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b"
	Dec 02 16:17:07 old-k8s-version-380588 kubelet[733]: E1202 16:17:07.749162     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ftcrw_kubernetes-dashboard(8ad75430-6092-4b71-92ab-1041a127ac88)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw" podUID="8ad75430-6092-4b71-92ab-1041a127ac88"
	Dec 02 16:17:09 old-k8s-version-380588 kubelet[733]: I1202 16:17:09.765663     733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mwmcm" podStartSLOduration=1.6761012210000001 podCreationTimestamp="2025-12-02 16:17:01 +0000 UTC" firstStartedPulling="2025-12-02 16:17:02.04428735 +0000 UTC m=+15.479999508" lastFinishedPulling="2025-12-02 16:17:09.133784887 +0000 UTC m=+22.569497041" observedRunningTime="2025-12-02 16:17:09.76529298 +0000 UTC m=+23.201005152" watchObservedRunningTime="2025-12-02 16:17:09.765598754 +0000 UTC m=+23.201310927"
	Dec 02 16:17:12 old-k8s-version-380588 kubelet[733]: I1202 16:17:12.020464     733 scope.go:117] "RemoveContainer" containerID="0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b"
	Dec 02 16:17:12 old-k8s-version-380588 kubelet[733]: E1202 16:17:12.020836     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ftcrw_kubernetes-dashboard(8ad75430-6092-4b71-92ab-1041a127ac88)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw" podUID="8ad75430-6092-4b71-92ab-1041a127ac88"
	Dec 02 16:17:27 old-k8s-version-380588 kubelet[733]: I1202 16:17:27.663904     733 scope.go:117] "RemoveContainer" containerID="0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b"
	Dec 02 16:17:27 old-k8s-version-380588 kubelet[733]: I1202 16:17:27.800850     733 scope.go:117] "RemoveContainer" containerID="0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b"
	Dec 02 16:17:27 old-k8s-version-380588 kubelet[733]: I1202 16:17:27.801092     733 scope.go:117] "RemoveContainer" containerID="5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346"
	Dec 02 16:17:27 old-k8s-version-380588 kubelet[733]: E1202 16:17:27.804041     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ftcrw_kubernetes-dashboard(8ad75430-6092-4b71-92ab-1041a127ac88)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw" podUID="8ad75430-6092-4b71-92ab-1041a127ac88"
	Dec 02 16:17:32 old-k8s-version-380588 kubelet[733]: I1202 16:17:32.019785     733 scope.go:117] "RemoveContainer" containerID="5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346"
	Dec 02 16:17:32 old-k8s-version-380588 kubelet[733]: E1202 16:17:32.020258     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ftcrw_kubernetes-dashboard(8ad75430-6092-4b71-92ab-1041a127ac88)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw" podUID="8ad75430-6092-4b71-92ab-1041a127ac88"
	Dec 02 16:17:43 old-k8s-version-380588 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 16:17:43 old-k8s-version-380588 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 16:17:43 old-k8s-version-380588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 16:17:43 old-k8s-version-380588 systemd[1]: kubelet.service: Consumed 1.693s CPU time.
	
	
	==> kubernetes-dashboard [d3b98e3ce5ef5315549e5e82069de566f733711b1c003f5dcf7e0fd0f2108a47] <==
	2025/12/02 16:17:09 Using namespace: kubernetes-dashboard
	2025/12/02 16:17:09 Using in-cluster config to connect to apiserver
	2025/12/02 16:17:09 Using secret token for csrf signing
	2025/12/02 16:17:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 16:17:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 16:17:09 Successful initial request to the apiserver, version: v1.28.0
	2025/12/02 16:17:09 Generating JWE encryption key
	2025/12/02 16:17:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 16:17:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 16:17:09 Initializing JWE encryption key from synchronized object
	2025/12/02 16:17:09 Creating in-cluster Sidecar client
	2025/12/02 16:17:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:09 Serving insecurely on HTTP port: 9090
	2025/12/02 16:17:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:09 Starting overwatch
	
	
	==> storage-provisioner [a8f293ec5a85a4629b5301ed6f052814c79479439f97486c750e2d8f5e2ec1f5] <==
	I1202 16:16:50.055520       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 16:16:50.059507       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [c325d39d2a5d69fe2b31e92e4f9788a06cbe591e8f5b9a834b9dab65b20c1ac8] <==
	I1202 16:16:50.739810       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 16:16:50.748320       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 16:16:50.748366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 16:17:08.146371       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 16:17:08.146595       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-380588_6b6485f8-35d5-4f54-b39c-cbe40277c4ae!
	I1202 16:17:08.146754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc3102a2-5536-4fab-baaf-1e9e658904c7", APIVersion:"v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-380588_6b6485f8-35d5-4f54-b39c-cbe40277c4ae became leader
	I1202 16:17:08.246760       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-380588_6b6485f8-35d5-4f54-b39c-cbe40277c4ae!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-380588 -n old-k8s-version-380588
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-380588 -n old-k8s-version-380588: exit status 2 (350.798105ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-380588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-380588
helpers_test.go:243: (dbg) docker inspect old-k8s-version-380588:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5",
	        "Created": "2025-12-02T16:15:24.388732142Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 607867,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:16:40.314561184Z",
	            "FinishedAt": "2025-12-02T16:16:39.283796401Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5/hostname",
	        "HostsPath": "/var/lib/docker/containers/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5/hosts",
	        "LogPath": "/var/lib/docker/containers/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5/a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5-json.log",
	        "Name": "/old-k8s-version-380588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-380588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-380588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a0a1616e8b44e3eee10890bb03aad62d5402afaed42de003f0e4ecec52bf4ef5",
	                "LowerDir": "/var/lib/docker/overlay2/cc7db27ba93f361cedfb46f5902b70f222396dd2f79762e474c32c7912e9c9f1-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc7db27ba93f361cedfb46f5902b70f222396dd2f79762e474c32c7912e9c9f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc7db27ba93f361cedfb46f5902b70f222396dd2f79762e474c32c7912e9c9f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc7db27ba93f361cedfb46f5902b70f222396dd2f79762e474c32c7912e9c9f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-380588",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-380588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-380588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-380588",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-380588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d3016d30e636ad5b26f68c8ba3434fae66fe6e447a05bf044d9eb87bd62d352a",
	            "SandboxKey": "/var/run/docker/netns/d3016d30e636",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33240"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33241"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33244"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33242"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33243"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-380588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12755aa6121ef84808d7e2051c86e67e4ac4ab231ddc7e94bd39dd8ca085a952",
	                    "EndpointID": "a232ca67e3089767be78ddc2fc5580ea520fc4739f992ce93a45eb049e021f59",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "fe:95:a6:8a:67:c1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-380588",
	                        "a0a1616e8b44"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-380588 -n old-k8s-version-380588
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-380588 -n old-k8s-version-380588: exit status 2 (383.051904ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-380588 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-380588 logs -n 25: (1.229872491s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-589300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo crio config                                                                                                                                                                                                             │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p bridge-589300                                                                                                                                                                                                                              │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p disable-driver-mounts-904481                                                                                                                                                                                                               │ disable-driver-mounts-904481 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-380588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p old-k8s-version-380588 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-534842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p no-preload-534842 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-380588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p no-preload-534842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-046271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p embed-certs-046271 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-806420 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-046271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-806420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ old-k8s-version-380588 image list --format=json                                                                                                                                                                                               │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p old-k8s-version-380588 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ no-preload-534842 image list --format=json                                                                                                                                                                                                    │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p no-preload-534842 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:17:22
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:17:22.498316  617021 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:17:22.498682  617021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:22.498698  617021 out.go:374] Setting ErrFile to fd 2...
	I1202 16:17:22.498706  617021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:22.499020  617021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:17:22.499708  617021 out.go:368] Setting JSON to false
	I1202 16:17:22.501327  617021 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10783,"bootTime":1764681459,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:17:22.501399  617021 start.go:143] virtualization: kvm guest
	I1202 16:17:22.505282  617021 out.go:179] * [default-k8s-diff-port-806420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:17:22.506595  617021 notify.go:221] Checking for updates...
	I1202 16:17:22.506646  617021 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:17:22.507981  617021 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:17:22.509145  617021 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:22.510227  617021 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:17:22.511263  617021 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:17:22.512202  617021 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:17:22.513803  617021 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:22.514580  617021 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:17:22.546450  617021 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:17:22.546572  617021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:22.614629  617021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:17:22.602669456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:22.614775  617021 docker.go:319] overlay module found
	I1202 16:17:22.616372  617021 out.go:179] * Using the docker driver based on existing profile
	I1202 16:17:20.554206  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:17:20.554226  615191 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:17:20.554286  615191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-046271
	I1202 16:17:20.578798  615191 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:20.578835  615191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:17:20.578900  615191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-046271
	I1202 16:17:20.590547  615191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:17:20.597866  615191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:17:20.608006  615191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:17:20.696829  615191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:17:20.711938  615191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:17:20.715717  615191 node_ready.go:35] waiting up to 6m0s for node "embed-certs-046271" to be "Ready" ...
	I1202 16:17:20.724206  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:17:20.724236  615191 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:17:20.733876  615191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:20.741340  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:17:20.741367  615191 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:17:20.760344  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:17:20.760372  615191 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:17:20.777477  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:17:20.777507  615191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:17:20.794322  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:17:20.794352  615191 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:17:20.812771  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:17:20.812806  615191 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:17:20.827575  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:17:20.827606  615191 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:17:20.843608  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:17:20.843637  615191 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:17:20.858834  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:17:20.858862  615191 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:17:20.877363  615191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:17:22.050597  615191 node_ready.go:49] node "embed-certs-046271" is "Ready"
	I1202 16:17:22.050643  615191 node_ready.go:38] duration metric: took 1.334887125s for node "embed-certs-046271" to be "Ready" ...
	I1202 16:17:22.050670  615191 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:17:22.050729  615191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:17:22.687464  615191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.975454995s)
	I1202 16:17:22.687522  615191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.953605693s)
	I1202 16:17:22.687655  615191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.810242956s)
	I1202 16:17:22.687712  615191 api_server.go:72] duration metric: took 2.165624029s to wait for apiserver process to appear ...
	I1202 16:17:22.617494  617021 start.go:309] selected driver: docker
	I1202 16:17:22.617510  617021 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:22.617607  617021 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:17:22.618289  617021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:22.687951  617021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:17:22.676818567 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:22.688331  617021 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:17:22.688378  617021 cni.go:84] Creating CNI manager for ""
	I1202 16:17:22.688459  617021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:17:22.688539  617021 start.go:353] cluster config:
	{Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:22.687737  615191 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:17:22.687841  615191 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 16:17:22.689323  615191 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-046271 addons enable metrics-server
	
	I1202 16:17:22.690518  617021 out.go:179] * Starting "default-k8s-diff-port-806420" primary control-plane node in "default-k8s-diff-port-806420" cluster
	I1202 16:17:22.691442  617021 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:17:22.692381  617021 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:17:22.696323  615191 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:22.696349  615191 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
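The verbose /healthz output above is what minikube polls while the restarted apiserver finishes bootstrapping; the two [-] poststarthook entries clear once the RBAC bootstrap roles and the default priority classes have been created. As a rough manual equivalent (sketch only; the IP, port and profile name are taken from this log, and anonymous access to /healthz is assumed to be allowed, as it is with the default system:public-info-viewer binding):

    # Hit the same verbose health endpoint minikube is polling
    curl -k 'https://192.168.76.2:8443/healthz?verbose'
    # Or go through the authenticated client once the kubeconfig is usable
    kubectl --context embed-certs-046271 get --raw '/healthz?verbose'
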
	I1202 16:17:22.701692  615191 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 16:17:22.693673  617021 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:17:22.693741  617021 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:17:22.693782  617021 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 16:17:22.693799  617021 cache.go:65] Caching tarball of preloaded images
	I1202 16:17:22.693901  617021 preload.go:238] Found /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 16:17:22.693915  617021 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 16:17:22.694040  617021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json ...
	I1202 16:17:22.717168  617021 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:17:22.717185  617021 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 16:17:22.717204  617021 cache.go:243] Successfully downloaded all kic artifacts
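Above, minikube skips pulling the kicbase image because it is already present in the local Docker daemon. A hand-rolled version of that existence check might look like this (sketch; the tag comes from the log, the digest pinning is dropped for brevity):

    # Exit status 0 means the image is already local and the pull can be skipped
    docker image inspect --format '{{.Id}}' \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974 \
      >/dev/null 2>&1 && echo 'present, skipping pull' || echo 'not found locally'
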
	I1202 16:17:22.717240  617021 start.go:360] acquireMachinesLock for default-k8s-diff-port-806420: {Name:mk8a961b68c6bbf9b1910f8ae43c90e49f86c0f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:22.717306  617021 start.go:364] duration metric: took 43.2µs to acquireMachinesLock for "default-k8s-diff-port-806420"
	I1202 16:17:22.717329  617021 start.go:96] Skipping create...Using existing machine configuration
	I1202 16:17:22.717337  617021 fix.go:54] fixHost starting: 
	I1202 16:17:22.717575  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:22.736168  617021 fix.go:112] recreateIfNeeded on default-k8s-diff-port-806420: state=Stopped err=<nil>
	W1202 16:17:22.736197  617021 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 16:17:22.702818  615191 addons.go:530] duration metric: took 2.180728191s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 16:17:23.187965  615191 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 16:17:23.202226  615191 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:23.202260  615191 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:19.307997  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:21.806201  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:20.509898  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	W1202 16:17:22.511187  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	W1202 16:17:25.009769  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	I1202 16:17:22.738049  617021 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-806420" ...
	I1202 16:17:22.738131  617021 cli_runner.go:164] Run: docker start default-k8s-diff-port-806420
	I1202 16:17:23.056389  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:23.080845  617021 kic.go:430] container "default-k8s-diff-port-806420" state is running.
	I1202 16:17:23.081352  617021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:17:23.104364  617021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json ...
	I1202 16:17:23.104731  617021 machine.go:94] provisionDockerMachine start ...
	I1202 16:17:23.104810  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:23.132129  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:23.132593  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:23.132615  617021 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:17:23.133560  617021 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44490->127.0.0.1:33255: read: connection reset by peer
	I1202 16:17:26.278234  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-806420
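The "connection reset by peer" a few lines up is a transient failure while sshd inside the freshly restarted container comes up; libmachine retries until the forwarded port answers, which it does a few seconds later as shown above. A stand-alone way to wait for the same condition might be (sketch; the port, key path and user are the ones shown in this log, and nc is assumed to be available):

    # Poll the host-side forwarded SSH port until the container's sshd accepts connections
    until nc -z 127.0.0.1 33255; do sleep 1; done
    ssh -o StrictHostKeyChecking=no -p 33255 \
        -i /home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa \
        docker@127.0.0.1 hostname
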
	
	I1202 16:17:26.278279  617021 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-806420"
	I1202 16:17:26.278370  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.298722  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:26.298946  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:26.298961  617021 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-806420 && echo "default-k8s-diff-port-806420" | sudo tee /etc/hostname
	I1202 16:17:26.455925  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-806420
	
	I1202 16:17:26.456010  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.475742  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:26.476020  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:26.476041  617021 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-806420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-806420/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-806420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:17:26.621706  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:17:26.621744  617021 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:17:26.621776  617021 ubuntu.go:190] setting up certificates
	I1202 16:17:26.621791  617021 provision.go:84] configureAuth start
	I1202 16:17:26.621871  617021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:17:26.646855  617021 provision.go:143] copyHostCerts
	I1202 16:17:26.646932  617021 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:17:26.646949  617021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:17:26.647023  617021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:17:26.647146  617021 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:17:26.647160  617021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:17:26.647202  617021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:17:26.647293  617021 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:17:26.647305  617021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:17:26.647345  617021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:17:26.647443  617021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-806420 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-806420 localhost minikube]
	I1202 16:17:26.754337  617021 provision.go:177] copyRemoteCerts
	I1202 16:17:26.754415  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:17:26.754477  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.777385  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:26.893005  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1202 16:17:26.918128  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:17:26.944489  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:17:26.970311  617021 provision.go:87] duration metric: took 348.497825ms to configureAuth
	I1202 16:17:26.970349  617021 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:17:26.970597  617021 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:26.970740  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.995213  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:26.995551  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:26.995581  617021 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:17:23.688681  615191 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 16:17:23.693093  615191 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1202 16:17:23.694079  615191 api_server.go:141] control plane version: v1.34.2
	I1202 16:17:23.694104  615191 api_server.go:131] duration metric: took 1.006283162s to wait for apiserver health ...
	I1202 16:17:23.694113  615191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:17:23.697817  615191 system_pods.go:59] 8 kube-system pods found
	I1202 16:17:23.697855  615191 system_pods.go:61] "coredns-66bc5c9577-f2vhx" [364e193c-f53a-4a43-b365-fe8364c3bd0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:23.697865  615191 system_pods.go:61] "etcd-embed-certs-046271" [5b715b6b-8154-4ca8-9dc1-795be52cb8b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:23.697876  615191 system_pods.go:61] "kindnet-wpj6k" [9249e8d2-e10c-4cae-bf04-cbf331109cf5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:23.697883  615191 system_pods.go:61] "kube-apiserver-embed-certs-046271" [f87f3619-f513-463f-bb69-acf168ec4ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:23.697892  615191 system_pods.go:61] "kube-controller-manager-embed-certs-046271" [bbdde76a-6098-496b-aaeb-2d61a714017a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:23.697899  615191 system_pods.go:61] "kube-proxy-q9pxb" [85574988-c836-4351-80bf-92683e782d91] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:23.697905  615191 system_pods.go:61] "kube-scheduler-embed-certs-046271" [d3b40c19-3363-443d-93f9-d2789b47d291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:23.697910  615191 system_pods.go:61] "storage-provisioner" [5a625bd8-b8b8-4abc-b86a-d39218c7ffe3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:23.697918  615191 system_pods.go:74] duration metric: took 3.801084ms to wait for pod list to return data ...
	I1202 16:17:23.697926  615191 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:17:23.700382  615191 default_sa.go:45] found service account: "default"
	I1202 16:17:23.700399  615191 default_sa.go:55] duration metric: took 2.466186ms for default service account to be created ...
	I1202 16:17:23.700407  615191 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:17:23.703139  615191 system_pods.go:86] 8 kube-system pods found
	I1202 16:17:23.703167  615191 system_pods.go:89] "coredns-66bc5c9577-f2vhx" [364e193c-f53a-4a43-b365-fe8364c3bd0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:23.703178  615191 system_pods.go:89] "etcd-embed-certs-046271" [5b715b6b-8154-4ca8-9dc1-795be52cb8b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:23.703189  615191 system_pods.go:89] "kindnet-wpj6k" [9249e8d2-e10c-4cae-bf04-cbf331109cf5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:23.703199  615191 system_pods.go:89] "kube-apiserver-embed-certs-046271" [f87f3619-f513-463f-bb69-acf168ec4ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:23.703214  615191 system_pods.go:89] "kube-controller-manager-embed-certs-046271" [bbdde76a-6098-496b-aaeb-2d61a714017a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:23.703227  615191 system_pods.go:89] "kube-proxy-q9pxb" [85574988-c836-4351-80bf-92683e782d91] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:23.703256  615191 system_pods.go:89] "kube-scheduler-embed-certs-046271" [d3b40c19-3363-443d-93f9-d2789b47d291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:23.703268  615191 system_pods.go:89] "storage-provisioner" [5a625bd8-b8b8-4abc-b86a-d39218c7ffe3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:23.703278  615191 system_pods.go:126] duration metric: took 2.864031ms to wait for k8s-apps to be running ...
	I1202 16:17:23.703288  615191 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:17:23.703342  615191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:23.717127  615191 system_svc.go:56] duration metric: took 13.83377ms WaitForService to wait for kubelet
	I1202 16:17:23.717156  615191 kubeadm.go:587] duration metric: took 3.195075641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:17:23.717179  615191 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:17:23.720108  615191 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:17:23.720130  615191 node_conditions.go:123] node cpu capacity is 8
	I1202 16:17:23.720143  615191 node_conditions.go:105] duration metric: took 2.959591ms to run NodePressure ...
	I1202 16:17:23.720159  615191 start.go:242] waiting for startup goroutines ...
	I1202 16:17:23.720169  615191 start.go:247] waiting for cluster config update ...
	I1202 16:17:23.720186  615191 start.go:256] writing updated cluster config ...
	I1202 16:17:23.720469  615191 ssh_runner.go:195] Run: rm -f paused
	I1202 16:17:23.724393  615191 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:23.728063  615191 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f2vhx" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 16:17:25.734503  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:27.735550  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:23.807143  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:26.306569  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:28.307617  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	I1202 16:17:27.600994  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
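The SSH command above wrote the insecure-registry flag into /etc/sysconfig/crio.minikube and restarted CRI-O; the empty error plus the echoed file content indicate it succeeded. A quick follow-up check on the node could be (sketch):

    # Confirm the drop-in exists and that CRI-O came back up after the restart
    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio
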
	
	I1202 16:17:27.601027  617021 machine.go:97] duration metric: took 4.496275002s to provisionDockerMachine
	I1202 16:17:27.601043  617021 start.go:293] postStartSetup for "default-k8s-diff-port-806420" (driver="docker")
	I1202 16:17:27.601058  617021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:17:27.601128  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:17:27.601178  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.623246  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:27.730663  617021 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:17:27.735877  617021 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:17:27.735907  617021 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:17:27.735918  617021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:17:27.735966  617021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:17:27.736035  617021 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:17:27.736120  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:17:27.745825  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:17:27.768713  617021 start.go:296] duration metric: took 167.65018ms for postStartSetup
	I1202 16:17:27.768803  617021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:17:27.768855  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.789992  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:27.900148  617021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:17:27.906371  617021 fix.go:56] duration metric: took 5.18902239s for fixHost
	I1202 16:17:27.906403  617021 start.go:83] releasing machines lock for "default-k8s-diff-port-806420", held for 5.189082645s
	I1202 16:17:27.906507  617021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:17:27.929346  617021 ssh_runner.go:195] Run: cat /version.json
	I1202 16:17:27.929406  617021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:17:27.929409  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.929492  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.952635  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:27.954515  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:28.138245  617021 ssh_runner.go:195] Run: systemctl --version
	I1202 16:17:28.147344  617021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:17:28.198225  617021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:17:28.204870  617021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:17:28.204948  617021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:17:28.216111  617021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 16:17:28.216139  617021 start.go:496] detecting cgroup driver to use...
	I1202 16:17:28.216177  617021 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:17:28.216233  617021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:17:28.236312  617021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:17:28.253597  617021 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:17:28.253663  617021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:17:28.274789  617021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:17:28.292789  617021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:17:28.400578  617021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:17:28.502622  617021 docker.go:234] disabling docker service ...
	I1202 16:17:28.502709  617021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:17:28.519863  617021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:17:28.534627  617021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:17:28.622884  617021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:17:28.715766  617021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:17:28.728514  617021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:17:28.743515  617021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:17:28.743589  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.752513  617021 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:17:28.752573  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.761803  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.770820  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.779678  617021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:17:28.788772  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.799817  617021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.812207  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.822959  617021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:17:28.830615  617021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:17:28.839315  617021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:28.935291  617021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 16:17:29.312918  617021 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:17:29.312980  617021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:17:29.316948  617021 start.go:564] Will wait 60s for crictl version
	I1202 16:17:29.316995  617021 ssh_runner.go:195] Run: which crictl
	I1202 16:17:29.320879  617021 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:17:29.346184  617021 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
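With CRI-O restarted, minikube verifies the runtime over the CRI socket, as shown above. Equivalent manual checks on the node might be (sketch; crictl uses the /etc/crictl.yaml written earlier, and the grep keys correspond to the sed edits applied to 02-crio.conf above):

    sudo crictl version
    sudo crio config | grep -E 'cgroup_manager|pause_image'
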
	I1202 16:17:29.346247  617021 ssh_runner.go:195] Run: crio --version
	I1202 16:17:29.374009  617021 ssh_runner.go:195] Run: crio --version
	I1202 16:17:29.405802  617021 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	W1202 16:17:27.010483  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	I1202 16:17:29.009809  607516 pod_ready.go:94] pod "coredns-5dd5756b68-fsfh2" is "Ready"
	I1202 16:17:29.009836  607516 pod_ready.go:86] duration metric: took 38.00631225s for pod "coredns-5dd5756b68-fsfh2" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.012870  607516 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.017277  607516 pod_ready.go:94] pod "etcd-old-k8s-version-380588" is "Ready"
	I1202 16:17:29.017298  607516 pod_ready.go:86] duration metric: took 4.40606ms for pod "etcd-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.019970  607516 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.023996  607516 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-380588" is "Ready"
	I1202 16:17:29.024017  607516 pod_ready.go:86] duration metric: took 4.027937ms for pod "kube-apiserver-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.026488  607516 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.207471  607516 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-380588" is "Ready"
	I1202 16:17:29.207497  607516 pod_ready.go:86] duration metric: took 180.991786ms for pod "kube-controller-manager-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.408298  607516 pod_ready.go:83] waiting for pod "kube-proxy-jqstm" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.809129  607516 pod_ready.go:94] pod "kube-proxy-jqstm" is "Ready"
	I1202 16:17:29.809162  607516 pod_ready.go:86] duration metric: took 400.836367ms for pod "kube-proxy-jqstm" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:30.009989  607516 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:30.408957  607516 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-380588" is "Ready"
	I1202 16:17:30.409044  607516 pod_ready.go:86] duration metric: took 399.025835ms for pod "kube-scheduler-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:30.409070  607516 pod_ready.go:40] duration metric: took 39.411732547s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:30.482562  607516 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1202 16:17:30.484303  607516 out.go:203] 
	W1202 16:17:30.485747  607516 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1202 16:17:30.486932  607516 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1202 16:17:30.488134  607516 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-380588" cluster and "default" namespace by default
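The warning a few lines up flags the version skew between the host kubectl (1.34.2) and this profile's cluster (1.28.0); the suggested workaround is to let minikube run a kubectl that matches the cluster, roughly (sketch; profile name taken from the log):

    # Downloads and runs a kubectl matching the cluster's Kubernetes version on first use
    minikube -p old-k8s-version-380588 kubectl -- get pods -A
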
	I1202 16:17:29.407098  617021 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-806420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:17:29.424770  617021 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 16:17:29.429550  617021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:17:29.439999  617021 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:17:29.440104  617021 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:17:29.440140  617021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:17:29.471019  617021 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:17:29.471045  617021 crio.go:433] Images already preloaded, skipping extraction
	I1202 16:17:29.471102  617021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:17:29.496542  617021 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:17:29.496569  617021 cache_images.go:86] Images are preloaded, skipping loading
	I1202 16:17:29.496578  617021 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1202 16:17:29.496701  617021 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-806420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
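The ExecStart override above is what minikube writes into a systemd drop-in (copied below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, 378 bytes). To inspect the merged unit the node will actually run, something like this would work (sketch):

    # Show kubelet.service together with all drop-ins, including minikube's 10-kubeadm.conf
    sudo systemctl cat kubelet
    # The log then reloads systemd and starts the service, i.e. effectively:
    sudo systemctl daemon-reload && sudo systemctl start kubelet
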
	I1202 16:17:29.496786  617021 ssh_runner.go:195] Run: crio config
	I1202 16:17:29.541566  617021 cni.go:84] Creating CNI manager for ""
	I1202 16:17:29.541586  617021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:17:29.541596  617021 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 16:17:29.541616  617021 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-806420 NodeName:default-k8s-diff-port-806420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:17:29.541728  617021 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-806420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
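The kubeadm configuration above is staged as /var/tmp/minikube/kubeadm.yaml.new (2224 bytes, per the scp below). A non-destructive way to sanity-check a config like this would be a dry run against the same kubeadm binary (sketch; on a node that already hosts a cluster it may still surface preflight findings, but it does not modify anything):

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
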
	
	I1202 16:17:29.541789  617021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 16:17:29.550029  617021 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 16:17:29.550090  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:17:29.558054  617021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1202 16:17:29.571441  617021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 16:17:29.584227  617021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1202 16:17:29.597282  617021 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:17:29.601067  617021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
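The one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP 192.168.85.2. Verifying the entry afterwards is straightforward (sketch):

    getent hosts control-plane.minikube.internal   # should print 192.168.85.2
    grep control-plane.minikube.internal /etc/hosts
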
	I1202 16:17:29.611632  617021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:29.694704  617021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:17:29.718170  617021 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420 for IP: 192.168.85.2
	I1202 16:17:29.718196  617021 certs.go:195] generating shared ca certs ...
	I1202 16:17:29.718216  617021 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:29.718396  617021 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:17:29.718471  617021 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:17:29.718486  617021 certs.go:257] generating profile certs ...
	I1202 16:17:29.718602  617021 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/client.key
	I1202 16:17:29.718693  617021 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key.20cb4091
	I1202 16:17:29.718752  617021 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.key
	I1202 16:17:29.718896  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:17:29.718940  617021 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:17:29.718953  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:17:29.718990  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:17:29.719023  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:17:29.719054  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:17:29.719109  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:17:29.719924  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:17:29.741007  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:17:29.761350  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:17:29.780876  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:17:29.804308  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 16:17:29.825901  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 16:17:29.848908  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:17:29.867865  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 16:17:29.888652  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:17:29.910779  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:17:29.932582  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:17:29.956561  617021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:17:29.972696  617021 ssh_runner.go:195] Run: openssl version
	I1202 16:17:29.980524  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:17:29.991411  617021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:17:29.996151  617021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:17:29.996212  617021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:17:30.050503  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:17:30.061483  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:17:30.072491  617021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:17:30.077665  617021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:17:30.077718  617021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:17:30.129682  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:17:30.140657  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:17:30.152273  617021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:17:30.157239  617021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:17:30.157304  617021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:17:30.211554  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:17:30.223094  617021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:17:30.228304  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:17:30.285622  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:17:30.343619  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:17:30.405618  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:17:30.470279  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:17:30.533815  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
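	The openssl x509 -checkend 86400 calls above fail when a certificate expires within the next 24 hours. A minimal Go sketch of the same check, using the standard crypto/x509 and encoding/pem packages (the path and error handling are illustrative, not the minikube implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// Illustrative path; the log above checks several certs under
		// /var/lib/minikube/certs with `openssl x509 -checkend 86400`.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// -checkend 86400 fails if the cert expires within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid beyond 24h, notAfter:", cert.NotAfter)
	}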
	I1202 16:17:30.599554  617021 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:30.599678  617021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:17:30.599735  617021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:17:30.654880  617021 cri.go:89] found id: "dd7adc25ca0d8fd13c03d582eb1846e44e7ca31363dd13737dfcd8541ae71f4a"
	I1202 16:17:30.654952  617021 cri.go:89] found id: "85a4f9f063a689e0c01b71338ce33ac27c1c4ef5a601031762f5f6f8468c7949"
	I1202 16:17:30.654958  617021 cri.go:89] found id: "fa204ce25b4b750a274bec528d833933338cbebe536dd59bd13e8ef6cec0cb00"
	I1202 16:17:30.654963  617021 cri.go:89] found id: "e986fe28a3e21e60cd56299b5d31eb8159c847908a86b5e9049cff20903959aa"
	I1202 16:17:30.654967  617021 cri.go:89] found id: ""
	I1202 16:17:30.655019  617021 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 16:17:30.673871  617021 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:30Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:17:30.673941  617021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:17:30.686769  617021 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:17:30.686797  617021 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:17:30.686844  617021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:17:30.699192  617021 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:17:30.701520  617021 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-806420" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:30.702957  617021 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-264555/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-806420" cluster setting kubeconfig missing "default-k8s-diff-port-806420" context setting]
	I1202 16:17:30.704478  617021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:30.707218  617021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:17:30.719927  617021 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 16:17:30.720026  617021 kubeadm.go:602] duration metric: took 33.222622ms to restartPrimaryControlPlane
	I1202 16:17:30.720048  617021 kubeadm.go:403] duration metric: took 120.509203ms to StartCluster
	I1202 16:17:30.720091  617021 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:30.720179  617021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:30.723308  617021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:30.723718  617021 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:17:30.724045  617021 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:30.724081  617021 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 16:17:30.724157  617021 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-806420"
	I1202 16:17:30.724174  617021 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-806420"
	W1202 16:17:30.724182  617021 addons.go:248] addon storage-provisioner should already be in state true
	I1202 16:17:30.724203  617021 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:17:30.724727  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.724888  617021 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-806420"
	I1202 16:17:30.724906  617021 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-806420"
	W1202 16:17:30.724915  617021 addons.go:248] addon dashboard should already be in state true
	I1202 16:17:30.724912  617021 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-806420"
	I1202 16:17:30.724939  617021 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-806420"
	I1202 16:17:30.724944  617021 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:17:30.725432  617021 out.go:179] * Verifying Kubernetes components...
	I1202 16:17:30.725507  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.725453  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.730554  617021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:30.764253  617021 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:17:30.765559  617021 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 16:17:30.765563  617021 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:17:30.765773  617021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 16:17:30.765913  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:30.771476  617021 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 16:17:30.772748  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:17:30.772772  617021 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:17:30.772833  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:30.774089  617021 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-806420"
	W1202 16:17:30.774153  617021 addons.go:248] addon default-storageclass should already be in state true
	I1202 16:17:30.774196  617021 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:17:30.774739  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.805290  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:30.815719  617021 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:30.815744  617021 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:17:30.815803  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:30.818534  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:30.847757  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:30.983053  617021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:17:31.006025  617021 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-806420" to be "Ready" ...
	I1202 16:17:31.015709  617021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:17:31.044129  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:17:31.044161  617021 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:17:31.080007  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:17:31.080035  617021 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:17:31.089152  617021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:31.105968  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:17:31.105999  617021 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:17:31.125794  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:17:31.125819  617021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:17:31.146432  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:17:31.146461  617021 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:17:31.166977  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:17:31.167010  617021 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:17:31.185493  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:17:31.185536  617021 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:17:31.204002  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:17:31.204034  617021 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:17:31.223408  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:17:31.223455  617021 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:17:31.243155  617021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 16:17:30.312353  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	I1202 16:17:31.311117  609654 pod_ready.go:94] pod "coredns-7d764666f9-fxl4s" is "Ready"
	I1202 16:17:31.311148  609654 pod_ready.go:86] duration metric: took 32.51010024s for pod "coredns-7d764666f9-fxl4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.314691  609654 pod_ready.go:83] waiting for pod "etcd-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.321620  609654 pod_ready.go:94] pod "etcd-no-preload-534842" is "Ready"
	I1202 16:17:31.321651  609654 pod_ready.go:86] duration metric: took 6.872089ms for pod "etcd-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.324914  609654 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.330629  609654 pod_ready.go:94] pod "kube-apiserver-no-preload-534842" is "Ready"
	I1202 16:17:31.330663  609654 pod_ready.go:86] duration metric: took 5.720105ms for pod "kube-apiserver-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.333806  609654 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.505747  609654 pod_ready.go:94] pod "kube-controller-manager-no-preload-534842" is "Ready"
	I1202 16:17:31.505784  609654 pod_ready.go:86] duration metric: took 171.955168ms for pod "kube-controller-manager-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.705911  609654 pod_ready.go:83] waiting for pod "kube-proxy-xqnrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.105456  609654 pod_ready.go:94] pod "kube-proxy-xqnrx" is "Ready"
	I1202 16:17:32.105487  609654 pod_ready.go:86] duration metric: took 399.544466ms for pod "kube-proxy-xqnrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.306457  609654 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.705260  609654 pod_ready.go:94] pod "kube-scheduler-no-preload-534842" is "Ready"
	I1202 16:17:32.705298  609654 pod_ready.go:86] duration metric: took 398.794846ms for pod "kube-scheduler-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.705317  609654 pod_ready.go:40] duration metric: took 33.908136514s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:32.783728  609654 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 16:17:32.787599  609654 out.go:179] * Done! kubectl is now configured to use "no-preload-534842" cluster and "default" namespace by default
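	The pod_ready waits above poll each kube-system pod until its Ready condition is true or a timeout elapses. A rough client-go sketch of that style of wait (kubeconfig path, namespace, pod name, and timeout are illustrative; this is not the minikube implementation):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			// Pod name taken from the log above purely as an example.
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-6h6nr", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for pod to become Ready")
	}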
	W1202 16:17:30.238599  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:32.744223  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	I1202 16:17:32.896889  617021 node_ready.go:49] node "default-k8s-diff-port-806420" is "Ready"
	I1202 16:17:32.896991  617021 node_ready.go:38] duration metric: took 1.890924168s for node "default-k8s-diff-port-806420" to be "Ready" ...
	I1202 16:17:32.897022  617021 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:17:32.897106  617021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:17:33.630628  617021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.614880216s)
	I1202 16:17:33.630702  617021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.541520167s)
	I1202 16:17:33.630841  617021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.387645959s)
	I1202 16:17:33.630867  617021 api_server.go:72] duration metric: took 2.907113913s to wait for apiserver process to appear ...
	I1202 16:17:33.630880  617021 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:17:33.630901  617021 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 16:17:33.633116  617021 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-806420 addons enable metrics-server
	
	I1202 16:17:33.635678  617021 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:33.635702  617021 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
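	A 500 response like the one above is expected while the apiserver is still running its post-start hooks; the wait loop simply retries /healthz until it returns 200, as seen shortly below. A minimal Go sketch of such a retry loop, assuming the endpoint from the log and skipping TLS verification only to keep the example short (minikube itself trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above.
		url := "https://192.168.85.2:8444/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 30; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok:", string(body))
					return
				}
				// A 500 with "[-]poststarthook/... failed" usually means the
				// apiserver has not finished startup yet; retry.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			} else {
				fmt.Println("healthz request failed, retrying:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver did not become healthy in time")
	}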
	I1202 16:17:33.639966  617021 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 16:17:33.641004  617021 addons.go:530] duration metric: took 2.916912715s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 16:17:34.131947  617021 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 16:17:34.137470  617021 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 16:17:34.138943  617021 api_server.go:141] control plane version: v1.34.2
	I1202 16:17:34.139019  617021 api_server.go:131] duration metric: took 508.129517ms to wait for apiserver health ...
	I1202 16:17:34.139043  617021 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:17:34.144346  617021 system_pods.go:59] 8 kube-system pods found
	I1202 16:17:34.144412  617021 system_pods.go:61] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:34.144438  617021 system_pods.go:61] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:34.144453  617021 system_pods.go:61] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:34.144461  617021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:34.144472  617021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:34.144482  617021 system_pods.go:61] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:34.144495  617021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:34.144502  617021 system_pods.go:61] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:34.144515  617021 system_pods.go:74] duration metric: took 5.454658ms to wait for pod list to return data ...
	I1202 16:17:34.144526  617021 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:17:34.147568  617021 default_sa.go:45] found service account: "default"
	I1202 16:17:34.147593  617021 default_sa.go:55] duration metric: took 3.053699ms for default service account to be created ...
	I1202 16:17:34.147604  617021 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:17:34.151209  617021 system_pods.go:86] 8 kube-system pods found
	I1202 16:17:34.151246  617021 system_pods.go:89] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:34.151258  617021 system_pods.go:89] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:34.151270  617021 system_pods.go:89] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:34.151280  617021 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:34.151291  617021 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:34.151299  617021 system_pods.go:89] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:34.151307  617021 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:34.151315  617021 system_pods.go:89] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:34.151325  617021 system_pods.go:126] duration metric: took 3.713746ms to wait for k8s-apps to be running ...
	I1202 16:17:34.151335  617021 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:17:34.151394  617021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:34.170938  617021 system_svc.go:56] duration metric: took 19.587588ms WaitForService to wait for kubelet
	I1202 16:17:34.170990  617021 kubeadm.go:587] duration metric: took 3.447228899s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:17:34.171017  617021 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:17:34.176230  617021 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:17:34.176264  617021 node_conditions.go:123] node cpu capacity is 8
	I1202 16:17:34.176284  617021 node_conditions.go:105] duration metric: took 5.260608ms to run NodePressure ...
	I1202 16:17:34.176300  617021 start.go:242] waiting for startup goroutines ...
	I1202 16:17:34.176309  617021 start.go:247] waiting for cluster config update ...
	I1202 16:17:34.176324  617021 start.go:256] writing updated cluster config ...
	I1202 16:17:34.176722  617021 ssh_runner.go:195] Run: rm -f paused
	I1202 16:17:34.181758  617021 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:34.185626  617021 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6h6nr" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 16:17:36.191101  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:35.233349  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:37.234098  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:38.191695  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:40.691815  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:39.234621  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:41.734966  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 02 16:17:05 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:05.792509902Z" level=info msg="Starting container: 0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b" id=3cc60342-60e5-4753-b586-9695fb175aaa name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:05 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:05.794557867Z" level=info msg="Started container" PID=1688 containerID=0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw/dashboard-metrics-scraper id=3cc60342-60e5-4753-b586-9695fb175aaa name=/runtime.v1.RuntimeService/StartContainer sandboxID=6992d4bf68678e6a29c4fbd4779bf788d9c80019221c348fc2578d610b220473
	Dec 02 16:17:06 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:06.755286813Z" level=info msg="Removing container: b98c9ddfa802dc6861934733f397432cdc14a2533e759114af75ba66b479bee7" id=49ad2dc2-9a68-4222-9576-cea0184cce7a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:06 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:06.812773366Z" level=info msg="Removed container b98c9ddfa802dc6861934733f397432cdc14a2533e759114af75ba66b479bee7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw/dashboard-metrics-scraper" id=49ad2dc2-9a68-4222-9576-cea0184cce7a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.133477063Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=92627f50-55a1-4b07-91a4-6689e38cba82 name=/runtime.v1.ImageService/PullImage
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.134408537Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a23be95d-dd41-4060-8fe7-d3a9a9522e98 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.136585873Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mwmcm/kubernetes-dashboard" id=ac7297ae-ca7f-40e3-b190-35b0aaeef93f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.136701476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.140998019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.141161872Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d4ea7881034e0780c992068845154e4f2dd9041c0667bd3b4939b634708093e2/merged/etc/group: no such file or directory"
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.141474823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.170090805Z" level=info msg="Created container d3b98e3ce5ef5315549e5e82069de566f733711b1c003f5dcf7e0fd0f2108a47: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mwmcm/kubernetes-dashboard" id=ac7297ae-ca7f-40e3-b190-35b0aaeef93f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.170734073Z" level=info msg="Starting container: d3b98e3ce5ef5315549e5e82069de566f733711b1c003f5dcf7e0fd0f2108a47" id=c8b5d0bb-59e4-4e38-8919-a1e7dac1532c name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:09 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:09.172847834Z" level=info msg="Started container" PID=1738 containerID=d3b98e3ce5ef5315549e5e82069de566f733711b1c003f5dcf7e0fd0f2108a47 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mwmcm/kubernetes-dashboard id=c8b5d0bb-59e4-4e38-8919-a1e7dac1532c name=/runtime.v1.RuntimeService/StartContainer sandboxID=63e9fe12c2a794c8689789c2d2ff7f886c1d44c03408fe9fab0b62775cac1873
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.664485268Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=acfd8df0-b243-4ae1-9570-5ea67fbb52da name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.665473018Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=71541a8a-fa26-47c4-827c-071700a6b39e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.66667516Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw/dashboard-metrics-scraper" id=47a652d3-2452-4d5d-ae2f-f73b13e44d87 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.666815043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.674503646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.675173068Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.703559236Z" level=info msg="Created container 5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw/dashboard-metrics-scraper" id=47a652d3-2452-4d5d-ae2f-f73b13e44d87 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.704191775Z" level=info msg="Starting container: 5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346" id=c602caaa-a8d7-4fdd-a39f-333baad66eb0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.705940863Z" level=info msg="Started container" PID=1759 containerID=5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw/dashboard-metrics-scraper id=c602caaa-a8d7-4fdd-a39f-333baad66eb0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6992d4bf68678e6a29c4fbd4779bf788d9c80019221c348fc2578d610b220473
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.802146876Z" level=info msg="Removing container: 0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b" id=bae02b25-5d07-42f1-8e9f-f9afae2bba90 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:27 old-k8s-version-380588 crio[567]: time="2025-12-02T16:17:27.815760022Z" level=info msg="Removed container 0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw/dashboard-metrics-scraper" id=bae02b25-5d07-42f1-8e9f-f9afae2bba90 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	5fe90db48aae4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   6992d4bf68678       dashboard-metrics-scraper-5f989dc9cf-ftcrw       kubernetes-dashboard
	d3b98e3ce5ef5       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago       Running             kubernetes-dashboard        0                   63e9fe12c2a79       kubernetes-dashboard-8694d4445c-mwmcm            kubernetes-dashboard
	c325d39d2a5d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Running             storage-provisioner         1                   6da9d5a9a1f50       storage-provisioner                              kube-system
	4a067bbe9ef75       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   8e27b39923567       busybox                                          default
	e4ec4ba515fab       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           58 seconds ago       Running             coredns                     0                   aa4f0a85a5631       coredns-5dd5756b68-fsfh2                         kube-system
	f5fa23473c235       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           58 seconds ago       Running             kube-proxy                  0                   b9c1cc909e5ff       kube-proxy-jqstm                                 kube-system
	19a923b6f740b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   f3b2d1797efc9       kindnet-cd4m6                                    kube-system
	a8f293ec5a85a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   6da9d5a9a1f50       storage-provisioner                              kube-system
	7f110a0363a9a       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   83400cd0d7133       kube-apiserver-old-k8s-version-380588            kube-system
	6dfca71f4fbfd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   af60a00965510       etcd-old-k8s-version-380588                      kube-system
	4d3bf69c2ebc8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   3550e91549fb0       kube-controller-manager-old-k8s-version-380588   kube-system
	6375989d50737       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   c033ae3c0f490       kube-scheduler-old-k8s-version-380588            kube-system
	
	
	==> coredns [e4ec4ba515fabd2712fb6c47a42ae38d829c32a9f5d6d7e8f7b2fff79861fe50] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51465 - 51879 "HINFO IN 6395565171171798244.7121814217675539279. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024281714s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-380588
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-380588
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=old-k8s-version-380588
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_15_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:15:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-380588
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:17:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:17:20 +0000   Tue, 02 Dec 2025 16:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:17:20 +0000   Tue, 02 Dec 2025 16:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:17:20 +0000   Tue, 02 Dec 2025 16:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:17:20 +0000   Tue, 02 Dec 2025 16:16:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-380588
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                c883883f-eefb-4ccc-83df-e6ee2918146f
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-fsfh2                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-old-k8s-version-380588                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m6s
	  kube-system                 kindnet-cd4m6                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-old-k8s-version-380588             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-old-k8s-version-380588    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-jqstm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-old-k8s-version-380588             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-ftcrw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-mwmcm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m6s               kubelet          Node old-k8s-version-380588 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s               kubelet          Node old-k8s-version-380588 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s               kubelet          Node old-k8s-version-380588 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m6s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s               node-controller  Node old-k8s-version-380588 event: Registered Node old-k8s-version-380588 in Controller
	  Normal  NodeReady                99s                kubelet          Node old-k8s-version-380588 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node old-k8s-version-380588 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node old-k8s-version-380588 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node old-k8s-version-380588 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node old-k8s-version-380588 event: Registered Node old-k8s-version-380588 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [6dfca71f4fbfde27fe7499c7118ecfb2f1add3481dc2e404f53badeec3d76a83] <==
	{"level":"info","ts":"2025-12-02T16:16:47.218571Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-02T16:16:47.218583Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-02T16:16:47.2189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-02T16:16:47.219049Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-02T16:16:47.21918Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T16:16:47.21922Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T16:16:47.222224Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-02T16:16:47.22261Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-02T16:16:47.222657Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-02T16:16:47.222737Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-02T16:16:47.222757Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-02T16:16:48.210738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-02T16:16:48.210802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-02T16:16:48.210844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-02T16:16:48.210867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-02T16:16:48.210874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-02T16:16:48.210907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-02T16:16:48.210922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-02T16:16:48.212496Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-380588 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-02T16:16:48.212504Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-02T16:16:48.212516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-02T16:16:48.21272Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-02T16:16:48.212744Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-02T16:16:48.213871Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-02T16:16:48.213969Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 16:17:48 up  3:00,  0 user,  load average: 4.47, 4.19, 2.73
	Linux old-k8s-version-380588 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19a923b6f740b6e9edc34def00b3c0200695a3c12243306e18b73e7cba12f465] <==
	I1202 16:16:50.322766       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 16:16:50.323076       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1202 16:16:50.323288       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:16:50.323311       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:16:50.323352       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:16:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:16:50.618403       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:16:50.618475       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:16:50.618489       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:16:50.618805       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 16:16:51.119164       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:16:51.119198       1 metrics.go:72] Registering metrics
	I1202 16:16:51.119294       1 controller.go:711] "Syncing nftables rules"
	I1202 16:17:00.619284       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 16:17:00.619345       1 main.go:301] handling current node
	I1202 16:17:10.619200       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 16:17:10.619254       1 main.go:301] handling current node
	I1202 16:17:20.618610       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 16:17:20.618658       1 main.go:301] handling current node
	I1202 16:17:30.618644       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 16:17:30.618694       1 main.go:301] handling current node
	I1202 16:17:40.618953       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 16:17:40.619020       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7f110a0363a9a5cf52f114e4eeb59c098716f360f4be3437bb75f0e0ddf16391] <==
	I1202 16:16:49.378169       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:16:49.397584       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1202 16:16:49.411782       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1202 16:16:49.411893       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1202 16:16:49.411955       1 shared_informer.go:318] Caches are synced for configmaps
	I1202 16:16:49.412337       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 16:16:49.413059       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1202 16:16:49.413087       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1202 16:16:49.413140       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1202 16:16:49.413305       1 aggregator.go:166] initial CRD sync complete...
	I1202 16:16:49.413386       1 autoregister_controller.go:141] Starting autoregister controller
	I1202 16:16:49.413462       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 16:16:49.413503       1 cache.go:39] Caches are synced for autoregister controller
	E1202 16:16:49.424609       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 16:16:50.284626       1 controller.go:624] quota admission added evaluator for: namespaces
	I1202 16:16:50.317730       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 16:16:50.322328       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1202 16:16:50.355117       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:16:50.366216       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:16:50.378757       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1202 16:16:50.428465       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.78.111"}
	I1202 16:16:50.443764       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.210.90"}
	I1202 16:17:01.623633       1 controller.go:624] quota admission added evaluator for: endpoints
	I1202 16:17:01.697832       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1202 16:17:01.770710       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4d3bf69c2ebc82ed7ac27121eb8894a9b4b6447e5a562f1e350b6d588d0ad01e] <==
	I1202 16:17:01.749355       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1202 16:17:01.749401       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1202 16:17:01.749414       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1202 16:17:01.752168       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.67µs"
	I1202 16:17:01.755748       1 shared_informer.go:318] Caches are synced for daemon sets
	I1202 16:17:01.761103       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1202 16:17:01.770630       1 shared_informer.go:318] Caches are synced for GC
	I1202 16:17:01.789671       1 shared_informer.go:318] Caches are synced for TTL
	I1202 16:17:01.799161       1 shared_informer.go:318] Caches are synced for resource quota
	I1202 16:17:01.806734       1 shared_informer.go:318] Caches are synced for resource quota
	I1202 16:17:01.857036       1 shared_informer.go:318] Caches are synced for persistent volume
	I1202 16:17:01.859402       1 shared_informer.go:318] Caches are synced for PV protection
	I1202 16:17:01.893941       1 shared_informer.go:318] Caches are synced for attach detach
	I1202 16:17:02.216622       1 shared_informer.go:318] Caches are synced for garbage collector
	I1202 16:17:02.219837       1 shared_informer.go:318] Caches are synced for garbage collector
	I1202 16:17:02.219870       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1202 16:17:05.752643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="141.09µs"
	I1202 16:17:06.760131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="132.2µs"
	I1202 16:17:07.765361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.743µs"
	I1202 16:17:09.772480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.92858ms"
	I1202 16:17:09.772731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.16µs"
	I1202 16:17:27.817118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.627µs"
	I1202 16:17:28.805651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.514539ms"
	I1202 16:17:28.805803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.511µs"
	I1202 16:17:32.034791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.347µs"
	
	
	==> kube-proxy [f5fa23473c23570bed3b8cae515e1d47152a8bbcc1d833bbb220c14786e91242] <==
	I1202 16:16:50.091017       1 server_others.go:69] "Using iptables proxy"
	I1202 16:16:50.102855       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1202 16:16:50.121772       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:16:50.124136       1 server_others.go:152] "Using iptables Proxier"
	I1202 16:16:50.124175       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1202 16:16:50.124182       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1202 16:16:50.124207       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1202 16:16:50.124442       1 server.go:846] "Version info" version="v1.28.0"
	I1202 16:16:50.124461       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:16:50.125227       1 config.go:97] "Starting endpoint slice config controller"
	I1202 16:16:50.125258       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1202 16:16:50.125285       1 config.go:188] "Starting service config controller"
	I1202 16:16:50.125288       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1202 16:16:50.126603       1 config.go:315] "Starting node config controller"
	I1202 16:16:50.126636       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1202 16:16:50.225796       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1202 16:16:50.225796       1 shared_informer.go:318] Caches are synced for service config
	I1202 16:16:50.227165       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6375989d507379cc812257f2c9f777cb49645b84e1445f665882a8f604b996ac] <==
	I1202 16:16:47.808108       1 serving.go:348] Generated self-signed cert in-memory
	I1202 16:16:49.398101       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1202 16:16:49.398134       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:16:49.402985       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1202 16:16:49.402999       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 16:16:49.403018       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:16:49.403023       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1202 16:16:49.403033       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1202 16:16:49.403022       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1202 16:16:49.403949       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1202 16:16:49.404019       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1202 16:16:49.503583       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1202 16:16:49.503585       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1202 16:16:49.503593       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Dec 02 16:17:01 old-k8s-version-380588 kubelet[733]: I1202 16:17:01.716997     733 topology_manager.go:215] "Topology Admit Handler" podUID="8ad75430-6092-4b71-92ab-1041a127ac88" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-ftcrw"
	Dec 02 16:17:01 old-k8s-version-380588 kubelet[733]: I1202 16:17:01.718873     733 topology_manager.go:215] "Topology Admit Handler" podUID="4a0441b6-699b-4b02-a86a-76b28b735c51" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-mwmcm"
	Dec 02 16:17:01 old-k8s-version-380588 kubelet[733]: I1202 16:17:01.735808     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8ad75430-6092-4b71-92ab-1041a127ac88-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-ftcrw\" (UID: \"8ad75430-6092-4b71-92ab-1041a127ac88\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw"
	Dec 02 16:17:01 old-k8s-version-380588 kubelet[733]: I1202 16:17:01.735866     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4a0441b6-699b-4b02-a86a-76b28b735c51-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-mwmcm\" (UID: \"4a0441b6-699b-4b02-a86a-76b28b735c51\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mwmcm"
	Dec 02 16:17:01 old-k8s-version-380588 kubelet[733]: I1202 16:17:01.735926     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cmc6\" (UniqueName: \"kubernetes.io/projected/8ad75430-6092-4b71-92ab-1041a127ac88-kube-api-access-6cmc6\") pod \"dashboard-metrics-scraper-5f989dc9cf-ftcrw\" (UID: \"8ad75430-6092-4b71-92ab-1041a127ac88\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw"
	Dec 02 16:17:01 old-k8s-version-380588 kubelet[733]: I1202 16:17:01.735962     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bzbf\" (UniqueName: \"kubernetes.io/projected/4a0441b6-699b-4b02-a86a-76b28b735c51-kube-api-access-8bzbf\") pod \"kubernetes-dashboard-8694d4445c-mwmcm\" (UID: \"4a0441b6-699b-4b02-a86a-76b28b735c51\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mwmcm"
	Dec 02 16:17:05 old-k8s-version-380588 kubelet[733]: I1202 16:17:05.739943     733 scope.go:117] "RemoveContainer" containerID="b98c9ddfa802dc6861934733f397432cdc14a2533e759114af75ba66b479bee7"
	Dec 02 16:17:06 old-k8s-version-380588 kubelet[733]: I1202 16:17:06.743933     733 scope.go:117] "RemoveContainer" containerID="b98c9ddfa802dc6861934733f397432cdc14a2533e759114af75ba66b479bee7"
	Dec 02 16:17:06 old-k8s-version-380588 kubelet[733]: I1202 16:17:06.744253     733 scope.go:117] "RemoveContainer" containerID="0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b"
	Dec 02 16:17:06 old-k8s-version-380588 kubelet[733]: E1202 16:17:06.744639     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ftcrw_kubernetes-dashboard(8ad75430-6092-4b71-92ab-1041a127ac88)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw" podUID="8ad75430-6092-4b71-92ab-1041a127ac88"
	Dec 02 16:17:07 old-k8s-version-380588 kubelet[733]: I1202 16:17:07.748764     733 scope.go:117] "RemoveContainer" containerID="0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b"
	Dec 02 16:17:07 old-k8s-version-380588 kubelet[733]: E1202 16:17:07.749162     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ftcrw_kubernetes-dashboard(8ad75430-6092-4b71-92ab-1041a127ac88)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw" podUID="8ad75430-6092-4b71-92ab-1041a127ac88"
	Dec 02 16:17:09 old-k8s-version-380588 kubelet[733]: I1202 16:17:09.765663     733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mwmcm" podStartSLOduration=1.6761012210000001 podCreationTimestamp="2025-12-02 16:17:01 +0000 UTC" firstStartedPulling="2025-12-02 16:17:02.04428735 +0000 UTC m=+15.479999508" lastFinishedPulling="2025-12-02 16:17:09.133784887 +0000 UTC m=+22.569497041" observedRunningTime="2025-12-02 16:17:09.76529298 +0000 UTC m=+23.201005152" watchObservedRunningTime="2025-12-02 16:17:09.765598754 +0000 UTC m=+23.201310927"
	Dec 02 16:17:12 old-k8s-version-380588 kubelet[733]: I1202 16:17:12.020464     733 scope.go:117] "RemoveContainer" containerID="0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b"
	Dec 02 16:17:12 old-k8s-version-380588 kubelet[733]: E1202 16:17:12.020836     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ftcrw_kubernetes-dashboard(8ad75430-6092-4b71-92ab-1041a127ac88)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw" podUID="8ad75430-6092-4b71-92ab-1041a127ac88"
	Dec 02 16:17:27 old-k8s-version-380588 kubelet[733]: I1202 16:17:27.663904     733 scope.go:117] "RemoveContainer" containerID="0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b"
	Dec 02 16:17:27 old-k8s-version-380588 kubelet[733]: I1202 16:17:27.800850     733 scope.go:117] "RemoveContainer" containerID="0288738cc8f7e5cbdd45b38523d3baad90888917cf8b2f4a56299f3138f1402b"
	Dec 02 16:17:27 old-k8s-version-380588 kubelet[733]: I1202 16:17:27.801092     733 scope.go:117] "RemoveContainer" containerID="5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346"
	Dec 02 16:17:27 old-k8s-version-380588 kubelet[733]: E1202 16:17:27.804041     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ftcrw_kubernetes-dashboard(8ad75430-6092-4b71-92ab-1041a127ac88)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw" podUID="8ad75430-6092-4b71-92ab-1041a127ac88"
	Dec 02 16:17:32 old-k8s-version-380588 kubelet[733]: I1202 16:17:32.019785     733 scope.go:117] "RemoveContainer" containerID="5fe90db48aae47a0930bb877fdf9445c413630f895e40b0fe3908389dd557346"
	Dec 02 16:17:32 old-k8s-version-380588 kubelet[733]: E1202 16:17:32.020258     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ftcrw_kubernetes-dashboard(8ad75430-6092-4b71-92ab-1041a127ac88)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ftcrw" podUID="8ad75430-6092-4b71-92ab-1041a127ac88"
	Dec 02 16:17:43 old-k8s-version-380588 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 16:17:43 old-k8s-version-380588 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 16:17:43 old-k8s-version-380588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 16:17:43 old-k8s-version-380588 systemd[1]: kubelet.service: Consumed 1.693s CPU time.
	
	
	==> kubernetes-dashboard [d3b98e3ce5ef5315549e5e82069de566f733711b1c003f5dcf7e0fd0f2108a47] <==
	2025/12/02 16:17:09 Using namespace: kubernetes-dashboard
	2025/12/02 16:17:09 Using in-cluster config to connect to apiserver
	2025/12/02 16:17:09 Using secret token for csrf signing
	2025/12/02 16:17:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 16:17:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 16:17:09 Successful initial request to the apiserver, version: v1.28.0
	2025/12/02 16:17:09 Generating JWE encryption key
	2025/12/02 16:17:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 16:17:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 16:17:09 Initializing JWE encryption key from synchronized object
	2025/12/02 16:17:09 Creating in-cluster Sidecar client
	2025/12/02 16:17:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:09 Serving insecurely on HTTP port: 9090
	2025/12/02 16:17:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:09 Starting overwatch
	
	
	==> storage-provisioner [a8f293ec5a85a4629b5301ed6f052814c79479439f97486c750e2d8f5e2ec1f5] <==
	I1202 16:16:50.055520       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 16:16:50.059507       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [c325d39d2a5d69fe2b31e92e4f9788a06cbe591e8f5b9a834b9dab65b20c1ac8] <==
	I1202 16:16:50.739810       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 16:16:50.748320       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 16:16:50.748366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 16:17:08.146371       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 16:17:08.146595       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-380588_6b6485f8-35d5-4f54-b39c-cbe40277c4ae!
	I1202 16:17:08.146754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc3102a2-5536-4fab-baaf-1e9e658904c7", APIVersion:"v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-380588_6b6485f8-35d5-4f54-b39c-cbe40277c4ae became leader
	I1202 16:17:08.246760       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-380588_6b6485f8-35d5-4f54-b39c-cbe40277c4ae!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-380588 -n old-k8s-version-380588
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-380588 -n old-k8s-version-380588: exit status 2 (384.093269ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-380588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.61s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-534842 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-534842 --alsologtostderr -v=1: exit status 80 (2.413075923s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-534842 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 16:17:44.714679  620812 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:17:44.714911  620812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:44.714922  620812 out.go:374] Setting ErrFile to fd 2...
	I1202 16:17:44.714926  620812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:44.715121  620812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:17:44.715403  620812 out.go:368] Setting JSON to false
	I1202 16:17:44.715443  620812 mustload.go:66] Loading cluster: no-preload-534842
	I1202 16:17:44.716030  620812 config.go:182] Loaded profile config "no-preload-534842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:17:44.716637  620812 cli_runner.go:164] Run: docker container inspect no-preload-534842 --format={{.State.Status}}
	I1202 16:17:44.738954  620812 host.go:66] Checking if "no-preload-534842" exists ...
	I1202 16:17:44.739324  620812 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:44.805886  620812 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-02 16:17:44.795220566 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:44.806661  620812 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-534842 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 16:17:44.808298  620812 out.go:179] * Pausing node no-preload-534842 ... 
	I1202 16:17:44.810004  620812 host.go:66] Checking if "no-preload-534842" exists ...
	I1202 16:17:44.810281  620812 ssh_runner.go:195] Run: systemctl --version
	I1202 16:17:44.810329  620812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-534842
	I1202 16:17:44.832046  620812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/no-preload-534842/id_rsa Username:docker}
	I1202 16:17:44.937412  620812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:44.951591  620812 pause.go:52] kubelet running: true
	I1202 16:17:44.951671  620812 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:17:45.121108  620812 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:17:45.121215  620812 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:17:45.198637  620812 cri.go:89] found id: "784e9d934927898b20c9e43c22133906438a1575abb416ae016ebfe0b2444f19"
	I1202 16:17:45.198681  620812 cri.go:89] found id: "cf43888cedff5c122573841043f9faaa886459652a505ba34085fc2cdb3a7d64"
	I1202 16:17:45.198686  620812 cri.go:89] found id: "d11384487d38dcb6fc74940486755eb9bd08fc8a3d4b5841e9a6d5f50afe8f69"
	I1202 16:17:45.198690  620812 cri.go:89] found id: "4c8eb7538dccf291c0dade54352e7e1daff8f787ed7c19748a63f7a9d724cc04"
	I1202 16:17:45.198693  620812 cri.go:89] found id: "ce137b34f41fe8fd3b9b895d8913ee21b506dd0abb93c65e3d35f67ee4dbad78"
	I1202 16:17:45.198701  620812 cri.go:89] found id: "ef4d71f3dba7f249c2dccfb9492705acceca27d92b988ad3f3be8ddf967a2524"
	I1202 16:17:45.198704  620812 cri.go:89] found id: "7f5c2cae2aa291edcbbe0f927b622ca7853d0323468ef1d4662a47fc47dab2a7"
	I1202 16:17:45.198707  620812 cri.go:89] found id: "44a6ec8649ccbb15298488aba888279a5c30ed43f97b8e65953b50f4199a5f54"
	I1202 16:17:45.198712  620812 cri.go:89] found id: "ec6d57760ee61c8da2007c23b76750466cdaa245ef7a003ac8ccc74510f7bd2e"
	I1202 16:17:45.198743  620812 cri.go:89] found id: "678df9e701579d2c9bec8e97da27418a40aef0f064af7456732c5cf2c76aafa6"
	I1202 16:17:45.198752  620812 cri.go:89] found id: "4acc4581c23774d9b9ae826d1cebbf7a4ab0f3eb613cad13a717ce4d3ceb6947"
	I1202 16:17:45.198756  620812 cri.go:89] found id: ""
	I1202 16:17:45.198817  620812 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:17:45.212333  620812 retry.go:31] will retry after 160.097634ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:45Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:17:45.373619  620812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:45.389181  620812 pause.go:52] kubelet running: false
	I1202 16:17:45.389247  620812 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:17:45.540506  620812 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:17:45.540618  620812 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:17:45.616142  620812 cri.go:89] found id: "784e9d934927898b20c9e43c22133906438a1575abb416ae016ebfe0b2444f19"
	I1202 16:17:45.616167  620812 cri.go:89] found id: "cf43888cedff5c122573841043f9faaa886459652a505ba34085fc2cdb3a7d64"
	I1202 16:17:45.616179  620812 cri.go:89] found id: "d11384487d38dcb6fc74940486755eb9bd08fc8a3d4b5841e9a6d5f50afe8f69"
	I1202 16:17:45.616183  620812 cri.go:89] found id: "4c8eb7538dccf291c0dade54352e7e1daff8f787ed7c19748a63f7a9d724cc04"
	I1202 16:17:45.616186  620812 cri.go:89] found id: "ce137b34f41fe8fd3b9b895d8913ee21b506dd0abb93c65e3d35f67ee4dbad78"
	I1202 16:17:45.616190  620812 cri.go:89] found id: "ef4d71f3dba7f249c2dccfb9492705acceca27d92b988ad3f3be8ddf967a2524"
	I1202 16:17:45.616193  620812 cri.go:89] found id: "7f5c2cae2aa291edcbbe0f927b622ca7853d0323468ef1d4662a47fc47dab2a7"
	I1202 16:17:45.616196  620812 cri.go:89] found id: "44a6ec8649ccbb15298488aba888279a5c30ed43f97b8e65953b50f4199a5f54"
	I1202 16:17:45.616198  620812 cri.go:89] found id: "ec6d57760ee61c8da2007c23b76750466cdaa245ef7a003ac8ccc74510f7bd2e"
	I1202 16:17:45.616204  620812 cri.go:89] found id: "678df9e701579d2c9bec8e97da27418a40aef0f064af7456732c5cf2c76aafa6"
	I1202 16:17:45.616207  620812 cri.go:89] found id: "4acc4581c23774d9b9ae826d1cebbf7a4ab0f3eb613cad13a717ce4d3ceb6947"
	I1202 16:17:45.616210  620812 cri.go:89] found id: ""
	I1202 16:17:45.616247  620812 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:17:45.629896  620812 retry.go:31] will retry after 515.538276ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:45Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:17:46.145604  620812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:46.159601  620812 pause.go:52] kubelet running: false
	I1202 16:17:46.159666  620812 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:17:46.310377  620812 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:17:46.310474  620812 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:17:46.384934  620812 cri.go:89] found id: "784e9d934927898b20c9e43c22133906438a1575abb416ae016ebfe0b2444f19"
	I1202 16:17:46.384989  620812 cri.go:89] found id: "cf43888cedff5c122573841043f9faaa886459652a505ba34085fc2cdb3a7d64"
	I1202 16:17:46.384996  620812 cri.go:89] found id: "d11384487d38dcb6fc74940486755eb9bd08fc8a3d4b5841e9a6d5f50afe8f69"
	I1202 16:17:46.385001  620812 cri.go:89] found id: "4c8eb7538dccf291c0dade54352e7e1daff8f787ed7c19748a63f7a9d724cc04"
	I1202 16:17:46.385005  620812 cri.go:89] found id: "ce137b34f41fe8fd3b9b895d8913ee21b506dd0abb93c65e3d35f67ee4dbad78"
	I1202 16:17:46.385011  620812 cri.go:89] found id: "ef4d71f3dba7f249c2dccfb9492705acceca27d92b988ad3f3be8ddf967a2524"
	I1202 16:17:46.385015  620812 cri.go:89] found id: "7f5c2cae2aa291edcbbe0f927b622ca7853d0323468ef1d4662a47fc47dab2a7"
	I1202 16:17:46.385020  620812 cri.go:89] found id: "44a6ec8649ccbb15298488aba888279a5c30ed43f97b8e65953b50f4199a5f54"
	I1202 16:17:46.385025  620812 cri.go:89] found id: "ec6d57760ee61c8da2007c23b76750466cdaa245ef7a003ac8ccc74510f7bd2e"
	I1202 16:17:46.385033  620812 cri.go:89] found id: "678df9e701579d2c9bec8e97da27418a40aef0f064af7456732c5cf2c76aafa6"
	I1202 16:17:46.385038  620812 cri.go:89] found id: "4acc4581c23774d9b9ae826d1cebbf7a4ab0f3eb613cad13a717ce4d3ceb6947"
	I1202 16:17:46.385043  620812 cri.go:89] found id: ""
	I1202 16:17:46.385089  620812 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:17:46.397818  620812 retry.go:31] will retry after 344.889387ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:46Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:17:46.743447  620812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:46.757810  620812 pause.go:52] kubelet running: false
	I1202 16:17:46.757862  620812 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:17:46.963062  620812 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:17:46.963161  620812 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:17:47.038830  620812 cri.go:89] found id: "784e9d934927898b20c9e43c22133906438a1575abb416ae016ebfe0b2444f19"
	I1202 16:17:47.038875  620812 cri.go:89] found id: "cf43888cedff5c122573841043f9faaa886459652a505ba34085fc2cdb3a7d64"
	I1202 16:17:47.038880  620812 cri.go:89] found id: "d11384487d38dcb6fc74940486755eb9bd08fc8a3d4b5841e9a6d5f50afe8f69"
	I1202 16:17:47.038884  620812 cri.go:89] found id: "4c8eb7538dccf291c0dade54352e7e1daff8f787ed7c19748a63f7a9d724cc04"
	I1202 16:17:47.038888  620812 cri.go:89] found id: "ce137b34f41fe8fd3b9b895d8913ee21b506dd0abb93c65e3d35f67ee4dbad78"
	I1202 16:17:47.038892  620812 cri.go:89] found id: "ef4d71f3dba7f249c2dccfb9492705acceca27d92b988ad3f3be8ddf967a2524"
	I1202 16:17:47.038896  620812 cri.go:89] found id: "7f5c2cae2aa291edcbbe0f927b622ca7853d0323468ef1d4662a47fc47dab2a7"
	I1202 16:17:47.038899  620812 cri.go:89] found id: "44a6ec8649ccbb15298488aba888279a5c30ed43f97b8e65953b50f4199a5f54"
	I1202 16:17:47.038902  620812 cri.go:89] found id: "ec6d57760ee61c8da2007c23b76750466cdaa245ef7a003ac8ccc74510f7bd2e"
	I1202 16:17:47.038909  620812 cri.go:89] found id: "678df9e701579d2c9bec8e97da27418a40aef0f064af7456732c5cf2c76aafa6"
	I1202 16:17:47.038913  620812 cri.go:89] found id: "4acc4581c23774d9b9ae826d1cebbf7a4ab0f3eb613cad13a717ce4d3ceb6947"
	I1202 16:17:47.038917  620812 cri.go:89] found id: ""
	I1202 16:17:47.038968  620812 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:17:47.053930  620812 out.go:203] 
	W1202 16:17:47.055002  620812 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 16:17:47.055025  620812 out.go:285] * 
	* 
	W1202 16:17:47.059585  620812 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 16:17:47.061118  620812 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-534842 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-534842
helpers_test.go:243: (dbg) docker inspect no-preload-534842:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa",
	        "Created": "2025-12-02T16:15:33.245538199Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 609846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:16:48.793012546Z",
	            "FinishedAt": "2025-12-02T16:16:47.74267679Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa/hosts",
	        "LogPath": "/var/lib/docker/containers/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa-json.log",
	        "Name": "/no-preload-534842",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-534842:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-534842",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa",
	                "LowerDir": "/var/lib/docker/overlay2/6a4b4e52df8b38f90b870d8719f5a1cc2a12c6d10fe8621038d418792a62b0c1-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a4b4e52df8b38f90b870d8719f5a1cc2a12c6d10fe8621038d418792a62b0c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a4b4e52df8b38f90b870d8719f5a1cc2a12c6d10fe8621038d418792a62b0c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a4b4e52df8b38f90b870d8719f5a1cc2a12c6d10fe8621038d418792a62b0c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-534842",
	                "Source": "/var/lib/docker/volumes/no-preload-534842/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-534842",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-534842",
	                "name.minikube.sigs.k8s.io": "no-preload-534842",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dc31dfe595381e2429266146ec958d8a21cf5e38069c217b35d005683f1c1f94",
	            "SandboxKey": "/var/run/docker/netns/dc31dfe59538",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33245"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33246"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33249"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33247"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33248"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-534842": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26f54f8ab80db170a83b2dc1c670501109df1b38f3efe9f0b57bf1b09b594ad5",
	                    "EndpointID": "8caede9995c860fc48df33243ff694d1ec3b8ce94fb92d1712c89b57c70691d4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "4e:c4:d6:d7:b7:09",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-534842",
	                        "a2904e47fdbb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
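For reference, individual fields can be pulled from this inspect output with a Go template instead of reading the full JSON; these are the same templates the harness itself runs later in the logs below (quoting adjusted for an interactive shell, so treat it as a sketch):

	docker container inspect --format={{.State.Status}} no-preload-534842
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-534842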
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534842 -n no-preload-534842
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534842 -n no-preload-534842: exit status 2 (384.254801ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
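The --format={{.Host}} query above reports only the host state ("Running"); a fuller view of the remaining components (kubelet, apiserver, kubeconfig) can be had by running status without the format flag, which is why the harness treats the non-zero exit here as possibly OK:

	out/minikube-linux-amd64 status -p no-preload-534842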
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-534842 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-534842 logs -n 25: (1.199296683s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-589300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ ssh     │ -p bridge-589300 sudo crio config                                                                                                                                                                                                             │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p bridge-589300                                                                                                                                                                                                                              │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p disable-driver-mounts-904481                                                                                                                                                                                                               │ disable-driver-mounts-904481 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-380588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p old-k8s-version-380588 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-534842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p no-preload-534842 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-380588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p no-preload-534842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-046271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p embed-certs-046271 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-806420 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-046271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-806420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ old-k8s-version-380588 image list --format=json                                                                                                                                                                                               │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p old-k8s-version-380588 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ no-preload-534842 image list --format=json                                                                                                                                                                                                    │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p no-preload-534842 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:17:22
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:17:22.498316  617021 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:17:22.498682  617021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:22.498698  617021 out.go:374] Setting ErrFile to fd 2...
	I1202 16:17:22.498706  617021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:22.499020  617021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:17:22.499708  617021 out.go:368] Setting JSON to false
	I1202 16:17:22.501327  617021 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10783,"bootTime":1764681459,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:17:22.501399  617021 start.go:143] virtualization: kvm guest
	I1202 16:17:22.505282  617021 out.go:179] * [default-k8s-diff-port-806420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:17:22.506595  617021 notify.go:221] Checking for updates...
	I1202 16:17:22.506646  617021 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:17:22.507981  617021 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:17:22.509145  617021 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:22.510227  617021 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:17:22.511263  617021 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:17:22.512202  617021 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:17:22.513803  617021 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:22.514580  617021 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:17:22.546450  617021 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:17:22.546572  617021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:22.614629  617021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:17:22.602669456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:22.614775  617021 docker.go:319] overlay module found
	I1202 16:17:22.616372  617021 out.go:179] * Using the docker driver based on existing profile
	I1202 16:17:20.554206  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:17:20.554226  615191 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:17:20.554286  615191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-046271
	I1202 16:17:20.578798  615191 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:20.578835  615191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:17:20.578900  615191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-046271
	I1202 16:17:20.590547  615191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:17:20.597866  615191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:17:20.608006  615191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:17:20.696829  615191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:17:20.711938  615191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:17:20.715717  615191 node_ready.go:35] waiting up to 6m0s for node "embed-certs-046271" to be "Ready" ...
	I1202 16:17:20.724206  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:17:20.724236  615191 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:17:20.733876  615191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:20.741340  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:17:20.741367  615191 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:17:20.760344  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:17:20.760372  615191 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:17:20.777477  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:17:20.777507  615191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:17:20.794322  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:17:20.794352  615191 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:17:20.812771  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:17:20.812806  615191 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:17:20.827575  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:17:20.827606  615191 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:17:20.843608  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:17:20.843637  615191 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:17:20.858834  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:17:20.858862  615191 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:17:20.877363  615191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:17:22.050597  615191 node_ready.go:49] node "embed-certs-046271" is "Ready"
	I1202 16:17:22.050643  615191 node_ready.go:38] duration metric: took 1.334887125s for node "embed-certs-046271" to be "Ready" ...
	I1202 16:17:22.050670  615191 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:17:22.050729  615191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:17:22.687464  615191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.975454995s)
	I1202 16:17:22.687522  615191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.953605693s)
	I1202 16:17:22.687655  615191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.810242956s)
	I1202 16:17:22.687712  615191 api_server.go:72] duration metric: took 2.165624029s to wait for apiserver process to appear ...
	I1202 16:17:22.617494  617021 start.go:309] selected driver: docker
	I1202 16:17:22.617510  617021 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:22.617607  617021 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:17:22.618289  617021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:22.687951  617021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:17:22.676818567 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:22.688331  617021 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:17:22.688378  617021 cni.go:84] Creating CNI manager for ""
	I1202 16:17:22.688459  617021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:17:22.688539  617021 start.go:353] cluster config:
	{Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:22.687737  615191 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:17:22.687841  615191 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 16:17:22.689323  615191 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-046271 addons enable metrics-server
	
	I1202 16:17:22.690518  617021 out.go:179] * Starting "default-k8s-diff-port-806420" primary control-plane node in "default-k8s-diff-port-806420" cluster
	I1202 16:17:22.691442  617021 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:17:22.692381  617021 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:17:22.696323  615191 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:22.696349  615191 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 16:17:22.701692  615191 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 16:17:22.693673  617021 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:17:22.693741  617021 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:17:22.693782  617021 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 16:17:22.693799  617021 cache.go:65] Caching tarball of preloaded images
	I1202 16:17:22.693901  617021 preload.go:238] Found /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 16:17:22.693915  617021 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 16:17:22.694040  617021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json ...
	I1202 16:17:22.717168  617021 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:17:22.717185  617021 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 16:17:22.717204  617021 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:17:22.717240  617021 start.go:360] acquireMachinesLock for default-k8s-diff-port-806420: {Name:mk8a961b68c6bbf9b1910f8ae43c90e49f86c0f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:22.717306  617021 start.go:364] duration metric: took 43.2µs to acquireMachinesLock for "default-k8s-diff-port-806420"
	I1202 16:17:22.717329  617021 start.go:96] Skipping create...Using existing machine configuration
	I1202 16:17:22.717337  617021 fix.go:54] fixHost starting: 
	I1202 16:17:22.717575  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:22.736168  617021 fix.go:112] recreateIfNeeded on default-k8s-diff-port-806420: state=Stopped err=<nil>
	W1202 16:17:22.736197  617021 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 16:17:22.702818  615191 addons.go:530] duration metric: took 2.180728191s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 16:17:23.187965  615191 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 16:17:23.202226  615191 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:23.202260  615191 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:19.307997  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:21.806201  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:20.509898  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	W1202 16:17:22.511187  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	W1202 16:17:25.009769  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	I1202 16:17:22.738049  617021 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-806420" ...
	I1202 16:17:22.738131  617021 cli_runner.go:164] Run: docker start default-k8s-diff-port-806420
	I1202 16:17:23.056389  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:23.080845  617021 kic.go:430] container "default-k8s-diff-port-806420" state is running.
	I1202 16:17:23.081352  617021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:17:23.104364  617021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json ...
	I1202 16:17:23.104731  617021 machine.go:94] provisionDockerMachine start ...
	I1202 16:17:23.104810  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:23.132129  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:23.132593  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:23.132615  617021 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:17:23.133560  617021 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44490->127.0.0.1:33255: read: connection reset by peer
	I1202 16:17:26.278234  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-806420
	
	I1202 16:17:26.278279  617021 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-806420"
	I1202 16:17:26.278370  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.298722  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:26.298946  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:26.298961  617021 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-806420 && echo "default-k8s-diff-port-806420" | sudo tee /etc/hostname
	I1202 16:17:26.455925  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-806420
	
	I1202 16:17:26.456010  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.475742  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:26.476020  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:26.476041  617021 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-806420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-806420/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-806420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:17:26.621706  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:17:26.621744  617021 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:17:26.621776  617021 ubuntu.go:190] setting up certificates
	I1202 16:17:26.621791  617021 provision.go:84] configureAuth start
	I1202 16:17:26.621871  617021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:17:26.646855  617021 provision.go:143] copyHostCerts
	I1202 16:17:26.646932  617021 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:17:26.646949  617021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:17:26.647023  617021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:17:26.647146  617021 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:17:26.647160  617021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:17:26.647202  617021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:17:26.647293  617021 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:17:26.647305  617021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:17:26.647345  617021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:17:26.647443  617021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-806420 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-806420 localhost minikube]
	I1202 16:17:26.754337  617021 provision.go:177] copyRemoteCerts
	I1202 16:17:26.754415  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:17:26.754477  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.777385  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:26.893005  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1202 16:17:26.918128  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:17:26.944489  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:17:26.970311  617021 provision.go:87] duration metric: took 348.497825ms to configureAuth
	I1202 16:17:26.970349  617021 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:17:26.970597  617021 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:26.970740  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.995213  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:26.995551  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:26.995581  617021 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:17:23.688681  615191 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 16:17:23.693093  615191 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1202 16:17:23.694079  615191 api_server.go:141] control plane version: v1.34.2
	I1202 16:17:23.694104  615191 api_server.go:131] duration metric: took 1.006283162s to wait for apiserver health ...
	I1202 16:17:23.694113  615191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:17:23.697817  615191 system_pods.go:59] 8 kube-system pods found
	I1202 16:17:23.697855  615191 system_pods.go:61] "coredns-66bc5c9577-f2vhx" [364e193c-f53a-4a43-b365-fe8364c3bd0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:23.697865  615191 system_pods.go:61] "etcd-embed-certs-046271" [5b715b6b-8154-4ca8-9dc1-795be52cb8b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:23.697876  615191 system_pods.go:61] "kindnet-wpj6k" [9249e8d2-e10c-4cae-bf04-cbf331109cf5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:23.697883  615191 system_pods.go:61] "kube-apiserver-embed-certs-046271" [f87f3619-f513-463f-bb69-acf168ec4ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:23.697892  615191 system_pods.go:61] "kube-controller-manager-embed-certs-046271" [bbdde76a-6098-496b-aaeb-2d61a714017a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:23.697899  615191 system_pods.go:61] "kube-proxy-q9pxb" [85574988-c836-4351-80bf-92683e782d91] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:23.697905  615191 system_pods.go:61] "kube-scheduler-embed-certs-046271" [d3b40c19-3363-443d-93f9-d2789b47d291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:23.697910  615191 system_pods.go:61] "storage-provisioner" [5a625bd8-b8b8-4abc-b86a-d39218c7ffe3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:23.697918  615191 system_pods.go:74] duration metric: took 3.801084ms to wait for pod list to return data ...
	I1202 16:17:23.697926  615191 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:17:23.700382  615191 default_sa.go:45] found service account: "default"
	I1202 16:17:23.700399  615191 default_sa.go:55] duration metric: took 2.466186ms for default service account to be created ...
	I1202 16:17:23.700407  615191 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:17:23.703139  615191 system_pods.go:86] 8 kube-system pods found
	I1202 16:17:23.703167  615191 system_pods.go:89] "coredns-66bc5c9577-f2vhx" [364e193c-f53a-4a43-b365-fe8364c3bd0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:23.703178  615191 system_pods.go:89] "etcd-embed-certs-046271" [5b715b6b-8154-4ca8-9dc1-795be52cb8b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:23.703189  615191 system_pods.go:89] "kindnet-wpj6k" [9249e8d2-e10c-4cae-bf04-cbf331109cf5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:23.703199  615191 system_pods.go:89] "kube-apiserver-embed-certs-046271" [f87f3619-f513-463f-bb69-acf168ec4ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:23.703214  615191 system_pods.go:89] "kube-controller-manager-embed-certs-046271" [bbdde76a-6098-496b-aaeb-2d61a714017a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:23.703227  615191 system_pods.go:89] "kube-proxy-q9pxb" [85574988-c836-4351-80bf-92683e782d91] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:23.703256  615191 system_pods.go:89] "kube-scheduler-embed-certs-046271" [d3b40c19-3363-443d-93f9-d2789b47d291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:23.703268  615191 system_pods.go:89] "storage-provisioner" [5a625bd8-b8b8-4abc-b86a-d39218c7ffe3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:23.703278  615191 system_pods.go:126] duration metric: took 2.864031ms to wait for k8s-apps to be running ...
	I1202 16:17:23.703288  615191 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:17:23.703342  615191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:23.717127  615191 system_svc.go:56] duration metric: took 13.83377ms WaitForService to wait for kubelet
	I1202 16:17:23.717156  615191 kubeadm.go:587] duration metric: took 3.195075641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:17:23.717179  615191 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:17:23.720108  615191 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:17:23.720130  615191 node_conditions.go:123] node cpu capacity is 8
	I1202 16:17:23.720143  615191 node_conditions.go:105] duration metric: took 2.959591ms to run NodePressure ...
	I1202 16:17:23.720159  615191 start.go:242] waiting for startup goroutines ...
	I1202 16:17:23.720169  615191 start.go:247] waiting for cluster config update ...
	I1202 16:17:23.720186  615191 start.go:256] writing updated cluster config ...
	I1202 16:17:23.720469  615191 ssh_runner.go:195] Run: rm -f paused
	I1202 16:17:23.724393  615191 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:23.728063  615191 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f2vhx" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 16:17:25.734503  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:27.735550  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:23.807143  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:26.306569  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:28.307617  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	I1202 16:17:27.600994  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:17:27.601027  617021 machine.go:97] duration metric: took 4.496275002s to provisionDockerMachine
	I1202 16:17:27.601043  617021 start.go:293] postStartSetup for "default-k8s-diff-port-806420" (driver="docker")
	I1202 16:17:27.601058  617021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:17:27.601128  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:17:27.601178  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.623246  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:27.730663  617021 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:17:27.735877  617021 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:17:27.735907  617021 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:17:27.735918  617021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:17:27.735966  617021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:17:27.736035  617021 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:17:27.736120  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:17:27.745825  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:17:27.768713  617021 start.go:296] duration metric: took 167.65018ms for postStartSetup
	I1202 16:17:27.768803  617021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:17:27.768855  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.789992  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:27.900148  617021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:17:27.906371  617021 fix.go:56] duration metric: took 5.18902239s for fixHost
	I1202 16:17:27.906403  617021 start.go:83] releasing machines lock for "default-k8s-diff-port-806420", held for 5.189082645s
	I1202 16:17:27.906507  617021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:17:27.929346  617021 ssh_runner.go:195] Run: cat /version.json
	I1202 16:17:27.929406  617021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:17:27.929409  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.929492  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.952635  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:27.954515  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:28.138245  617021 ssh_runner.go:195] Run: systemctl --version
	I1202 16:17:28.147344  617021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:17:28.198225  617021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:17:28.204870  617021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:17:28.204948  617021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:17:28.216111  617021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 16:17:28.216139  617021 start.go:496] detecting cgroup driver to use...
	I1202 16:17:28.216177  617021 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:17:28.216233  617021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:17:28.236312  617021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:17:28.253597  617021 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:17:28.253663  617021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:17:28.274789  617021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:17:28.292789  617021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:17:28.400578  617021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:17:28.502622  617021 docker.go:234] disabling docker service ...
	I1202 16:17:28.502709  617021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:17:28.519863  617021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:17:28.534627  617021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:17:28.622884  617021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:17:28.715766  617021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:17:28.728514  617021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:17:28.743515  617021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:17:28.743589  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.752513  617021 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:17:28.752573  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.761803  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.770820  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.779678  617021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:17:28.788772  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.799817  617021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.812207  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.822959  617021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:17:28.830615  617021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:17:28.839315  617021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:28.935291  617021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 16:17:29.312918  617021 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:17:29.312980  617021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:17:29.316948  617021 start.go:564] Will wait 60s for crictl version
	I1202 16:17:29.316995  617021 ssh_runner.go:195] Run: which crictl
	I1202 16:17:29.320879  617021 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:17:29.346184  617021 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 16:17:29.346247  617021 ssh_runner.go:195] Run: crio --version
	I1202 16:17:29.374009  617021 ssh_runner.go:195] Run: crio --version
	I1202 16:17:29.405802  617021 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	W1202 16:17:27.010483  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	I1202 16:17:29.009809  607516 pod_ready.go:94] pod "coredns-5dd5756b68-fsfh2" is "Ready"
	I1202 16:17:29.009836  607516 pod_ready.go:86] duration metric: took 38.00631225s for pod "coredns-5dd5756b68-fsfh2" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.012870  607516 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.017277  607516 pod_ready.go:94] pod "etcd-old-k8s-version-380588" is "Ready"
	I1202 16:17:29.017298  607516 pod_ready.go:86] duration metric: took 4.40606ms for pod "etcd-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.019970  607516 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.023996  607516 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-380588" is "Ready"
	I1202 16:17:29.024017  607516 pod_ready.go:86] duration metric: took 4.027937ms for pod "kube-apiserver-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.026488  607516 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.207471  607516 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-380588" is "Ready"
	I1202 16:17:29.207497  607516 pod_ready.go:86] duration metric: took 180.991786ms for pod "kube-controller-manager-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.408298  607516 pod_ready.go:83] waiting for pod "kube-proxy-jqstm" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.809129  607516 pod_ready.go:94] pod "kube-proxy-jqstm" is "Ready"
	I1202 16:17:29.809162  607516 pod_ready.go:86] duration metric: took 400.836367ms for pod "kube-proxy-jqstm" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:30.009989  607516 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:30.408957  607516 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-380588" is "Ready"
	I1202 16:17:30.409044  607516 pod_ready.go:86] duration metric: took 399.025835ms for pod "kube-scheduler-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:30.409070  607516 pod_ready.go:40] duration metric: took 39.411732547s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:30.482562  607516 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1202 16:17:30.484303  607516 out.go:203] 
	W1202 16:17:30.485747  607516 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1202 16:17:30.486932  607516 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1202 16:17:30.488134  607516 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-380588" cluster and "default" namespace by default
	I1202 16:17:29.407098  617021 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-806420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:17:29.424770  617021 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 16:17:29.429550  617021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:17:29.439999  617021 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:17:29.440104  617021 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:17:29.440140  617021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:17:29.471019  617021 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:17:29.471045  617021 crio.go:433] Images already preloaded, skipping extraction
	I1202 16:17:29.471102  617021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:17:29.496542  617021 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:17:29.496569  617021 cache_images.go:86] Images are preloaded, skipping loading
	I1202 16:17:29.496578  617021 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1202 16:17:29.496701  617021 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-806420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:17:29.496786  617021 ssh_runner.go:195] Run: crio config
	I1202 16:17:29.541566  617021 cni.go:84] Creating CNI manager for ""
	I1202 16:17:29.541586  617021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:17:29.541596  617021 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 16:17:29.541616  617021 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-806420 NodeName:default-k8s-diff-port-806420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:17:29.541728  617021 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-806420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 16:17:29.541789  617021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 16:17:29.550029  617021 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 16:17:29.550090  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:17:29.558054  617021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1202 16:17:29.571441  617021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 16:17:29.584227  617021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1202 16:17:29.597282  617021 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:17:29.601067  617021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:17:29.611632  617021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:29.694704  617021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:17:29.718170  617021 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420 for IP: 192.168.85.2
	I1202 16:17:29.718196  617021 certs.go:195] generating shared ca certs ...
	I1202 16:17:29.718216  617021 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:29.718396  617021 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:17:29.718471  617021 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:17:29.718486  617021 certs.go:257] generating profile certs ...
	I1202 16:17:29.718602  617021 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/client.key
	I1202 16:17:29.718693  617021 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key.20cb4091
	I1202 16:17:29.718752  617021 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.key
	I1202 16:17:29.718896  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:17:29.718940  617021 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:17:29.718953  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:17:29.718990  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:17:29.719023  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:17:29.719054  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:17:29.719109  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:17:29.719924  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:17:29.741007  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:17:29.761350  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:17:29.780876  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:17:29.804308  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 16:17:29.825901  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 16:17:29.848908  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:17:29.867865  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 16:17:29.888652  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:17:29.910779  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:17:29.932582  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:17:29.956561  617021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:17:29.972696  617021 ssh_runner.go:195] Run: openssl version
	I1202 16:17:29.980524  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:17:29.991411  617021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:17:29.996151  617021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:17:29.996212  617021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:17:30.050503  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:17:30.061483  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:17:30.072491  617021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:17:30.077665  617021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:17:30.077718  617021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:17:30.129682  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:17:30.140657  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:17:30.152273  617021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:17:30.157239  617021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:17:30.157304  617021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:17:30.211554  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:17:30.223094  617021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:17:30.228304  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:17:30.285622  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:17:30.343619  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:17:30.405618  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:17:30.470279  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:17:30.533815  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 16:17:30.599554  617021 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:30.599678  617021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:17:30.599735  617021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:17:30.654880  617021 cri.go:89] found id: "dd7adc25ca0d8fd13c03d582eb1846e44e7ca31363dd13737dfcd8541ae71f4a"
	I1202 16:17:30.654952  617021 cri.go:89] found id: "85a4f9f063a689e0c01b71338ce33ac27c1c4ef5a601031762f5f6f8468c7949"
	I1202 16:17:30.654958  617021 cri.go:89] found id: "fa204ce25b4b750a274bec528d833933338cbebe536dd59bd13e8ef6cec0cb00"
	I1202 16:17:30.654963  617021 cri.go:89] found id: "e986fe28a3e21e60cd56299b5d31eb8159c847908a86b5e9049cff20903959aa"
	I1202 16:17:30.654967  617021 cri.go:89] found id: ""
	I1202 16:17:30.655019  617021 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 16:17:30.673871  617021 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:30Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:17:30.673941  617021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:17:30.686769  617021 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:17:30.686797  617021 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:17:30.686844  617021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:17:30.699192  617021 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:17:30.701520  617021 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-806420" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:30.702957  617021 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-264555/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-806420" cluster setting kubeconfig missing "default-k8s-diff-port-806420" context setting]
	I1202 16:17:30.704478  617021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:30.707218  617021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:17:30.719927  617021 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 16:17:30.720026  617021 kubeadm.go:602] duration metric: took 33.222622ms to restartPrimaryControlPlane
	I1202 16:17:30.720048  617021 kubeadm.go:403] duration metric: took 120.509203ms to StartCluster
	I1202 16:17:30.720091  617021 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:30.720179  617021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:30.723308  617021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:30.723718  617021 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:17:30.724045  617021 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:30.724081  617021 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 16:17:30.724157  617021 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-806420"
	I1202 16:17:30.724174  617021 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-806420"
	W1202 16:17:30.724182  617021 addons.go:248] addon storage-provisioner should already be in state true
	I1202 16:17:30.724203  617021 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:17:30.724727  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.724888  617021 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-806420"
	I1202 16:17:30.724906  617021 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-806420"
	W1202 16:17:30.724915  617021 addons.go:248] addon dashboard should already be in state true
	I1202 16:17:30.724912  617021 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-806420"
	I1202 16:17:30.724939  617021 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-806420"
	I1202 16:17:30.724944  617021 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:17:30.725432  617021 out.go:179] * Verifying Kubernetes components...
	I1202 16:17:30.725507  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.725453  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.730554  617021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:30.764253  617021 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:17:30.765559  617021 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 16:17:30.765563  617021 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:17:30.765773  617021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 16:17:30.765913  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:30.771476  617021 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 16:17:30.772748  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:17:30.772772  617021 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:17:30.772833  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:30.774089  617021 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-806420"
	W1202 16:17:30.774153  617021 addons.go:248] addon default-storageclass should already be in state true
	I1202 16:17:30.774196  617021 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:17:30.774739  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.805290  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:30.815719  617021 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:30.815744  617021 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:17:30.815803  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:30.818534  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:30.847757  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:30.983053  617021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:17:31.006025  617021 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-806420" to be "Ready" ...
	I1202 16:17:31.015709  617021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:17:31.044129  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:17:31.044161  617021 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:17:31.080007  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:17:31.080035  617021 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:17:31.089152  617021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:31.105968  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:17:31.105999  617021 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:17:31.125794  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:17:31.125819  617021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:17:31.146432  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:17:31.146461  617021 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:17:31.166977  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:17:31.167010  617021 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:17:31.185493  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:17:31.185536  617021 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:17:31.204002  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:17:31.204034  617021 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:17:31.223408  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:17:31.223455  617021 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:17:31.243155  617021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 16:17:30.312353  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	I1202 16:17:31.311117  609654 pod_ready.go:94] pod "coredns-7d764666f9-fxl4s" is "Ready"
	I1202 16:17:31.311148  609654 pod_ready.go:86] duration metric: took 32.51010024s for pod "coredns-7d764666f9-fxl4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.314691  609654 pod_ready.go:83] waiting for pod "etcd-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.321620  609654 pod_ready.go:94] pod "etcd-no-preload-534842" is "Ready"
	I1202 16:17:31.321651  609654 pod_ready.go:86] duration metric: took 6.872089ms for pod "etcd-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.324914  609654 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.330629  609654 pod_ready.go:94] pod "kube-apiserver-no-preload-534842" is "Ready"
	I1202 16:17:31.330663  609654 pod_ready.go:86] duration metric: took 5.720105ms for pod "kube-apiserver-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.333806  609654 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.505747  609654 pod_ready.go:94] pod "kube-controller-manager-no-preload-534842" is "Ready"
	I1202 16:17:31.505784  609654 pod_ready.go:86] duration metric: took 171.955168ms for pod "kube-controller-manager-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.705911  609654 pod_ready.go:83] waiting for pod "kube-proxy-xqnrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.105456  609654 pod_ready.go:94] pod "kube-proxy-xqnrx" is "Ready"
	I1202 16:17:32.105487  609654 pod_ready.go:86] duration metric: took 399.544466ms for pod "kube-proxy-xqnrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.306457  609654 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.705260  609654 pod_ready.go:94] pod "kube-scheduler-no-preload-534842" is "Ready"
	I1202 16:17:32.705298  609654 pod_ready.go:86] duration metric: took 398.794846ms for pod "kube-scheduler-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.705317  609654 pod_ready.go:40] duration metric: took 33.908136514s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:32.783728  609654 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 16:17:32.787599  609654 out.go:179] * Done! kubectl is now configured to use "no-preload-534842" cluster and "default" namespace by default
	W1202 16:17:30.238599  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:32.744223  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	I1202 16:17:32.896889  617021 node_ready.go:49] node "default-k8s-diff-port-806420" is "Ready"
	I1202 16:17:32.896991  617021 node_ready.go:38] duration metric: took 1.890924168s for node "default-k8s-diff-port-806420" to be "Ready" ...
	I1202 16:17:32.897022  617021 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:17:32.897106  617021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:17:33.630628  617021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.614880216s)
	I1202 16:17:33.630702  617021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.541520167s)
	I1202 16:17:33.630841  617021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.387645959s)
	I1202 16:17:33.630867  617021 api_server.go:72] duration metric: took 2.907113913s to wait for apiserver process to appear ...
	I1202 16:17:33.630880  617021 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:17:33.630901  617021 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 16:17:33.633116  617021 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-806420 addons enable metrics-server
	
	I1202 16:17:33.635678  617021 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:33.635702  617021 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 16:17:33.639966  617021 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 16:17:33.641004  617021 addons.go:530] duration metric: took 2.916912715s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 16:17:34.131947  617021 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 16:17:34.137470  617021 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 16:17:34.138943  617021 api_server.go:141] control plane version: v1.34.2
	I1202 16:17:34.139019  617021 api_server.go:131] duration metric: took 508.129517ms to wait for apiserver health ...
	I1202 16:17:34.139043  617021 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:17:34.144346  617021 system_pods.go:59] 8 kube-system pods found
	I1202 16:17:34.144412  617021 system_pods.go:61] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:34.144438  617021 system_pods.go:61] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:34.144453  617021 system_pods.go:61] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:34.144461  617021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:34.144472  617021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:34.144482  617021 system_pods.go:61] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:34.144495  617021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:34.144502  617021 system_pods.go:61] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:34.144515  617021 system_pods.go:74] duration metric: took 5.454658ms to wait for pod list to return data ...
	I1202 16:17:34.144526  617021 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:17:34.147568  617021 default_sa.go:45] found service account: "default"
	I1202 16:17:34.147593  617021 default_sa.go:55] duration metric: took 3.053699ms for default service account to be created ...
	I1202 16:17:34.147604  617021 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:17:34.151209  617021 system_pods.go:86] 8 kube-system pods found
	I1202 16:17:34.151246  617021 system_pods.go:89] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:34.151258  617021 system_pods.go:89] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:34.151270  617021 system_pods.go:89] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:34.151280  617021 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:34.151291  617021 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:34.151299  617021 system_pods.go:89] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:34.151307  617021 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:34.151315  617021 system_pods.go:89] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:34.151325  617021 system_pods.go:126] duration metric: took 3.713746ms to wait for k8s-apps to be running ...
	I1202 16:17:34.151335  617021 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:17:34.151394  617021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:34.170938  617021 system_svc.go:56] duration metric: took 19.587588ms WaitForService to wait for kubelet
	I1202 16:17:34.170990  617021 kubeadm.go:587] duration metric: took 3.447228899s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:17:34.171017  617021 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:17:34.176230  617021 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:17:34.176264  617021 node_conditions.go:123] node cpu capacity is 8
	I1202 16:17:34.176284  617021 node_conditions.go:105] duration metric: took 5.260608ms to run NodePressure ...
	I1202 16:17:34.176300  617021 start.go:242] waiting for startup goroutines ...
	I1202 16:17:34.176309  617021 start.go:247] waiting for cluster config update ...
	I1202 16:17:34.176324  617021 start.go:256] writing updated cluster config ...
	I1202 16:17:34.176722  617021 ssh_runner.go:195] Run: rm -f paused
	I1202 16:17:34.181758  617021 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:34.185626  617021 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6h6nr" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 16:17:36.191101  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:35.233349  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:37.234098  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:38.191695  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:40.691815  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:39.234621  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:41.734966  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:42.693258  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:45.191908  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:47.192345  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 02 16:17:09 no-preload-534842 crio[562]: time="2025-12-02T16:17:09.434185977Z" level=info msg="Started container" PID=1745 containerID=ff03d8794fed2985d00df11059baabf61951e5d8c86cf6e9f5ad7e6e8760bd6d description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper id=3c151391-a41d-4af3-9cf2-a836416ba487 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e43a64505959da73cd8225cb013cd04884475f35991d0d2d47d20d06ab67328e
	Dec 02 16:17:10 no-preload-534842 crio[562]: time="2025-12-02T16:17:10.374771815Z" level=info msg="Removing container: ad458d08c2f22d674445230559d2036ccc3122e74daf745f83fd436c3110a701" id=cbdd74ec-9304-41da-ba61-ca54bbd90ffa name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:10 no-preload-534842 crio[562]: time="2025-12-02T16:17:10.385578602Z" level=info msg="Removed container ad458d08c2f22d674445230559d2036ccc3122e74daf745f83fd436c3110a701: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=cbdd74ec-9304-41da-ba61-ca54bbd90ffa name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.294128225Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6e5bb814-92e9-446f-8eef-c6c54cd09088 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.29736234Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a7704cf6-1cc3-4527-bfe4-bc5f46c61ae7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.300721718Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=4bf66704-04d1-4bba-bd12-ce8c37830af0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.300869531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.309769346Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.31043557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.337170001Z" level=info msg="Created container 3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=4bf66704-04d1-4bba-bd12-ce8c37830af0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.338756276Z" level=info msg="Starting container: 3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc" id=6995bc99-a0e4-461d-a4ca-3a783e27cc32 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.341525933Z" level=info msg="Started container" PID=1756 containerID=3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper id=6995bc99-a0e4-461d-a4ca-3a783e27cc32 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e43a64505959da73cd8225cb013cd04884475f35991d0d2d47d20d06ab67328e
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.404969976Z" level=info msg="Removing container: ff03d8794fed2985d00df11059baabf61951e5d8c86cf6e9f5ad7e6e8760bd6d" id=fbec879d-bca7-4367-84ea-be7a60007f83 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.418031673Z" level=info msg="Removed container ff03d8794fed2985d00df11059baabf61951e5d8c86cf6e9f5ad7e6e8760bd6d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=fbec879d-bca7-4367-84ea-be7a60007f83 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.293819471Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e78b4706-3d7f-463e-8854-bdd3120f035d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.294856865Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2c278448-3ab4-4bad-a462-0d80cb683ae6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.29607642Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=ef5e5fed-0a9d-444d-8440-6fce7024709f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.296225514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.302525022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.302974074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.346689592Z" level=info msg="Created container 678df9e701579d2c9bec8e97da27418a40aef0f064af7456732c5cf2c76aafa6: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=ef5e5fed-0a9d-444d-8440-6fce7024709f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.347438753Z" level=info msg="Starting container: 678df9e701579d2c9bec8e97da27418a40aef0f064af7456732c5cf2c76aafa6" id=970afce4-f53f-4e46-8668-aa89e615cb7f name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.349379317Z" level=info msg="Started container" PID=1788 containerID=678df9e701579d2c9bec8e97da27418a40aef0f064af7456732c5cf2c76aafa6 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper id=970afce4-f53f-4e46-8668-aa89e615cb7f name=/runtime.v1.RuntimeService/StartContainer sandboxID=e43a64505959da73cd8225cb013cd04884475f35991d0d2d47d20d06ab67328e
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.469316308Z" level=info msg="Removing container: 3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc" id=c70c00b8-4616-4d07-bf15-e9e3affd14c6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.479349023Z" level=info msg="Removed container 3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=c70c00b8-4616-4d07-bf15-e9e3affd14c6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	678df9e701579       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago       Exited              dashboard-metrics-scraper   3                   e43a64505959d       dashboard-metrics-scraper-867fb5f87b-nvld6   kubernetes-dashboard
	4acc4581c2377       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   cfcaf6339fd8f       kubernetes-dashboard-b84665fb8-6hz4c         kubernetes-dashboard
	784e9d9349278       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Running             storage-provisioner         1                   6736aa3ef6085       storage-provisioner                          kube-system
	cf43888cedff5       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           50 seconds ago      Running             coredns                     0                   06aff50ff5234       coredns-7d764666f9-fxl4s                     kube-system
	ac9caed37c1bb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   560edc2d221d0       busybox                                      default
	d11384487d38d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   6736aa3ef6085       storage-provisioner                          kube-system
	4c8eb7538dccf       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   0de2248931a42       kindnet-fn84j                                kube-system
	ce137b34f41fe       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           50 seconds ago      Running             kube-proxy                  0                   4d936749ce9ef       kube-proxy-xqnrx                             kube-system
	ef4d71f3dba7f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   69d12d8b1c5cc       etcd-no-preload-534842                       kube-system
	7f5c2cae2aa29       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           52 seconds ago      Running             kube-apiserver              0                   66f0e30c31f25       kube-apiserver-no-preload-534842             kube-system
	44a6ec8649ccb       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           52 seconds ago      Running             kube-controller-manager     0                   f4428ac81d74e       kube-controller-manager-no-preload-534842    kube-system
	ec6d57760ee61       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           52 seconds ago      Running             kube-scheduler              0                   a10070b9967b6       kube-scheduler-no-preload-534842             kube-system
	
	
	==> coredns [cf43888cedff5c122573841043f9faaa886459652a505ba34085fc2cdb3a7d64] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54621 - 17492 "HINFO IN 7032501483970489343.8395598587500903268. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020672712s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-534842
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-534842
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=no-preload-534842
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_15_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:15:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-534842
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:17:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:17:27 +0000   Tue, 02 Dec 2025 16:15:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:17:27 +0000   Tue, 02 Dec 2025 16:15:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:17:27 +0000   Tue, 02 Dec 2025 16:15:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:17:27 +0000   Tue, 02 Dec 2025 16:16:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-534842
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                08e82a9a-8bf2-46c3-bfb2-1095025d0bbb
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-fxl4s                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-534842                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-fn84j                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-534842              250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-534842     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-xqnrx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-534842              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-nvld6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-6hz4c          0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node no-preload-534842 event: Registered Node no-preload-534842 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node no-preload-534842 event: Registered Node no-preload-534842 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [ef4d71f3dba7f249c2dccfb9492705acceca27d92b988ad3f3be8ddf967a2524] <==
	{"level":"warn","ts":"2025-12-02T16:16:56.491476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.497910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.504457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.513757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.520218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.526592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.533632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.540362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.552535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.558754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.565475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.577633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.585898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.592949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.599678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.607195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.615093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.622691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.629937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.637201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.644914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.651519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.672749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.680175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.687378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52662","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 16:17:48 up  3:00,  0 user,  load average: 4.47, 4.19, 2.73
	Linux no-preload-534842 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c8eb7538dccf291c0dade54352e7e1daff8f787ed7c19748a63f7a9d724cc04] <==
	I1202 16:16:57.803390       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 16:16:57.895919       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1202 16:16:57.896108       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:16:57.896126       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:16:57.896144       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:16:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:16:58.099008       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:16:58.099511       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:16:58.099572       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:16:58.099744       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 16:16:58.700340       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:16:58.700365       1 metrics.go:72] Registering metrics
	I1202 16:16:58.700405       1 controller.go:711] "Syncing nftables rules"
	I1202 16:17:08.099860       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 16:17:08.099916       1 main.go:301] handling current node
	I1202 16:17:18.100498       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 16:17:18.100561       1 main.go:301] handling current node
	I1202 16:17:28.099630       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 16:17:28.099682       1 main.go:301] handling current node
	I1202 16:17:38.105496       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 16:17:38.105533       1 main.go:301] handling current node
	I1202 16:17:48.101515       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 16:17:48.101565       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7f5c2cae2aa291edcbbe0f927b622ca7853d0323468ef1d4662a47fc47dab2a7] <==
	I1202 16:16:57.227852       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 16:16:57.227471       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 16:16:57.227881       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 16:16:57.227925       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:57.228029       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1202 16:16:57.228060       1 aggregator.go:187] initial CRD sync complete...
	I1202 16:16:57.228068       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 16:16:57.228072       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 16:16:57.228075       1 cache.go:39] Caches are synced for autoregister controller
	I1202 16:16:57.228258       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 16:16:57.228480       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:57.233116       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1202 16:16:57.234796       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 16:16:57.247308       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:16:57.261412       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:16:57.517936       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:16:57.548596       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:16:57.576701       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:16:57.586524       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:16:57.659143       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.151.234"}
	I1202 16:16:57.678669       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.152.43"}
	I1202 16:16:58.130910       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 16:17:00.796001       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 16:17:00.896502       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:17:00.998791       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [44a6ec8649ccbb15298488aba888279a5c30ed43f97b8e65953b50f4199a5f54] <==
	I1202 16:17:00.351915       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352060       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352256       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352338       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352372       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352496       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1202 16:17:00.352582       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-534842"
	I1202 16:17:00.352606       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352739       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1202 16:17:00.352844       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352859       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.353101       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352879       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352895       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.353287       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.355816       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.355861       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.356184       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.356733       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:17:00.357020       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.361819       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.457140       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.457163       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.457178       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 16:17:00.457185       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [ce137b34f41fe8fd3b9b895d8913ee21b506dd0abb93c65e3d35f67ee4dbad78] <==
	I1202 16:16:57.702526       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:16:57.771756       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:16:57.872824       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:57.872867       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1202 16:16:57.872979       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:16:57.892224       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:16:57.892275       1 server_linux.go:136] "Using iptables Proxier"
	I1202 16:16:57.897414       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:16:57.897830       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 16:16:57.897894       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:16:57.899412       1 config.go:200] "Starting service config controller"
	I1202 16:16:57.899654       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:16:57.899491       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:16:57.899685       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:16:57.899491       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:16:57.899698       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:16:57.899861       1 config.go:309] "Starting node config controller"
	I1202 16:16:57.899961       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:16:57.899988       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:16:57.999855       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 16:16:57.999858       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 16:16:57.999891       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ec6d57760ee61c8da2007c23b76750466cdaa245ef7a003ac8ccc74510f7bd2e] <==
	I1202 16:16:56.202453       1 serving.go:386] Generated self-signed cert in-memory
	W1202 16:16:57.153937       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 16:16:57.153975       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 16:16:57.153988       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 16:16:57.153997       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 16:16:57.183747       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 16:16:57.183794       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:16:57.186712       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:16:57.186764       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:16:57.186942       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 16:16:57.187061       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 16:16:57.287132       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 16:17:15 no-preload-534842 kubelet[714]: E1202 16:17:15.387261     714 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-534842" containerName="etcd"
	Dec 02 16:17:15 no-preload-534842 kubelet[714]: E1202 16:17:15.387390     714 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-534842" containerName="kube-scheduler"
	Dec 02 16:17:18 no-preload-534842 kubelet[714]: E1202 16:17:18.318200     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" containerName="dashboard-metrics-scraper"
	Dec 02 16:17:18 no-preload-534842 kubelet[714]: I1202 16:17:18.318237     714 scope.go:122] "RemoveContainer" containerID="ff03d8794fed2985d00df11059baabf61951e5d8c86cf6e9f5ad7e6e8760bd6d"
	Dec 02 16:17:18 no-preload-534842 kubelet[714]: E1202 16:17:18.318397     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nvld6_kubernetes-dashboard(a4a63e16-a516-47d2-8bee-ed321517b392)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" podUID="a4a63e16-a516-47d2-8bee-ed321517b392"
	Dec 02 16:17:20 no-preload-534842 kubelet[714]: E1202 16:17:20.293333     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" containerName="dashboard-metrics-scraper"
	Dec 02 16:17:20 no-preload-534842 kubelet[714]: I1202 16:17:20.293385     714 scope.go:122] "RemoveContainer" containerID="ff03d8794fed2985d00df11059baabf61951e5d8c86cf6e9f5ad7e6e8760bd6d"
	Dec 02 16:17:20 no-preload-534842 kubelet[714]: I1202 16:17:20.402343     714 scope.go:122] "RemoveContainer" containerID="ff03d8794fed2985d00df11059baabf61951e5d8c86cf6e9f5ad7e6e8760bd6d"
	Dec 02 16:17:20 no-preload-534842 kubelet[714]: E1202 16:17:20.402480     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" containerName="dashboard-metrics-scraper"
	Dec 02 16:17:20 no-preload-534842 kubelet[714]: I1202 16:17:20.402502     714 scope.go:122] "RemoveContainer" containerID="3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc"
	Dec 02 16:17:20 no-preload-534842 kubelet[714]: E1202 16:17:20.402708     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nvld6_kubernetes-dashboard(a4a63e16-a516-47d2-8bee-ed321517b392)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" podUID="a4a63e16-a516-47d2-8bee-ed321517b392"
	Dec 02 16:17:28 no-preload-534842 kubelet[714]: E1202 16:17:28.317669     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" containerName="dashboard-metrics-scraper"
	Dec 02 16:17:28 no-preload-534842 kubelet[714]: I1202 16:17:28.317721     714 scope.go:122] "RemoveContainer" containerID="3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc"
	Dec 02 16:17:28 no-preload-534842 kubelet[714]: E1202 16:17:28.317967     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nvld6_kubernetes-dashboard(a4a63e16-a516-47d2-8bee-ed321517b392)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" podUID="a4a63e16-a516-47d2-8bee-ed321517b392"
	Dec 02 16:17:30 no-preload-534842 kubelet[714]: E1202 16:17:30.863329     714 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-fxl4s" containerName="coredns"
	Dec 02 16:17:44 no-preload-534842 kubelet[714]: E1202 16:17:44.293185     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" containerName="dashboard-metrics-scraper"
	Dec 02 16:17:44 no-preload-534842 kubelet[714]: I1202 16:17:44.293225     714 scope.go:122] "RemoveContainer" containerID="3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc"
	Dec 02 16:17:44 no-preload-534842 kubelet[714]: I1202 16:17:44.467904     714 scope.go:122] "RemoveContainer" containerID="3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc"
	Dec 02 16:17:44 no-preload-534842 kubelet[714]: E1202 16:17:44.468110     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" containerName="dashboard-metrics-scraper"
	Dec 02 16:17:44 no-preload-534842 kubelet[714]: I1202 16:17:44.468147     714 scope.go:122] "RemoveContainer" containerID="678df9e701579d2c9bec8e97da27418a40aef0f064af7456732c5cf2c76aafa6"
	Dec 02 16:17:44 no-preload-534842 kubelet[714]: E1202 16:17:44.468346     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nvld6_kubernetes-dashboard(a4a63e16-a516-47d2-8bee-ed321517b392)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" podUID="a4a63e16-a516-47d2-8bee-ed321517b392"
	Dec 02 16:17:45 no-preload-534842 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 16:17:45 no-preload-534842 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 16:17:45 no-preload-534842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 16:17:45 no-preload-534842 systemd[1]: kubelet.service: Consumed 1.712s CPU time.
	
	
	==> kubernetes-dashboard [4acc4581c23774d9b9ae826d1cebbf7a4ab0f3eb613cad13a717ce4d3ceb6947] <==
	2025/12/02 16:17:05 Using namespace: kubernetes-dashboard
	2025/12/02 16:17:05 Using in-cluster config to connect to apiserver
	2025/12/02 16:17:05 Using secret token for csrf signing
	2025/12/02 16:17:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 16:17:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 16:17:05 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/02 16:17:05 Generating JWE encryption key
	2025/12/02 16:17:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 16:17:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 16:17:06 Initializing JWE encryption key from synchronized object
	2025/12/02 16:17:06 Creating in-cluster Sidecar client
	2025/12/02 16:17:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:06 Serving insecurely on HTTP port: 9090
	2025/12/02 16:17:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:05 Starting overwatch
	
	
	==> storage-provisioner [784e9d934927898b20c9e43c22133906438a1575abb416ae016ebfe0b2444f19] <==
	W1202 16:17:23.829229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:25.832493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:25.836945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:27.841229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:27.847895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:29.852191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:29.856592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:31.862417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:31.869846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:33.873541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:33.877533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:35.881062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:35.884684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:37.888301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:37.894825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:39.898969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:39.905695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:41.910051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:41.914717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:43.917682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:43.921807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:45.925508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:45.929697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:47.933364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:47.938314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d11384487d38dcb6fc74940486755eb9bd08fc8a3d4b5841e9a6d5f50afe8f69] <==
	I1202 16:16:57.665541       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 16:16:57.667804       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
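The kube-scheduler log above warns that it cannot read the extension-apiserver-authentication ConfigMap and prints a suggested rolebinding as the usual fix. A minimal sketch of that command, following the template in the warning but binding the system:kube-scheduler user named in the denial (the binding name here is an arbitrary illustration):

	kubectl -n kube-system create rolebinding extension-apiserver-authentication-reader-scheduler \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler

In this run the warning is followed by a successful scheduler start and cache sync, so it may only reflect startup ordering rather than a persistent RBAC problem.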
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-534842 -n no-preload-534842
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-534842 -n no-preload-534842: exit status 2 (376.941164ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-534842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-534842
helpers_test.go:243: (dbg) docker inspect no-preload-534842:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa",
	        "Created": "2025-12-02T16:15:33.245538199Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 609846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:16:48.793012546Z",
	            "FinishedAt": "2025-12-02T16:16:47.74267679Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa/hosts",
	        "LogPath": "/var/lib/docker/containers/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa/a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa-json.log",
	        "Name": "/no-preload-534842",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-534842:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-534842",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a2904e47fdbbff69e8d2d0c47a8f31f01acd643bd1e30ffb705ef9b28bc00aaa",
	                "LowerDir": "/var/lib/docker/overlay2/6a4b4e52df8b38f90b870d8719f5a1cc2a12c6d10fe8621038d418792a62b0c1-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a4b4e52df8b38f90b870d8719f5a1cc2a12c6d10fe8621038d418792a62b0c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a4b4e52df8b38f90b870d8719f5a1cc2a12c6d10fe8621038d418792a62b0c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a4b4e52df8b38f90b870d8719f5a1cc2a12c6d10fe8621038d418792a62b0c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-534842",
	                "Source": "/var/lib/docker/volumes/no-preload-534842/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-534842",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-534842",
	                "name.minikube.sigs.k8s.io": "no-preload-534842",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dc31dfe595381e2429266146ec958d8a21cf5e38069c217b35d005683f1c1f94",
	            "SandboxKey": "/var/run/docker/netns/dc31dfe59538",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33245"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33246"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33249"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33247"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33248"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-534842": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26f54f8ab80db170a83b2dc1c670501109df1b38f3efe9f0b57bf1b09b594ad5",
	                    "EndpointID": "8caede9995c860fc48df33243ff694d1ec3b8ce94fb92d1712c89b57c70691d4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "4e:c4:d6:d7:b7:09",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-534842",
	                        "a2904e47fdbb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
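For this post-mortem the docker inspect dump above reduces to a couple of fields: the container state and the host port that 8443/tcp (the API server) is published on (127.0.0.1:33248 here). A hedged one-liner to pull just those, using the same Go-template index syntax minikube itself uses for port lookups:

	docker inspect no-preload-534842 \
	  --format 'state={{.State.Status}} apiserver=127.0.0.1:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'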
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534842 -n no-preload-534842
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534842 -n no-preload-534842: exit status 2 (371.618171ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
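Both single-field probes above print Running yet exit with status 2; minikube status encodes component health in its exit code, so a non-zero exit can coexist with a healthy-looking value for the one field queried (the kubelet log earlier shows kubelet.service being stopped, which is consistent with a degraded status). When triaging, running status without a --format template shows every component at once, for example:

	out/minikube-linux-amd64 status -p no-preload-534842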
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-534842 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-534842 logs -n 25: (1.176757933s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-589300 sudo crio config                                                                                                                                                                                                             │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p bridge-589300                                                                                                                                                                                                                              │ bridge-589300                │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ delete  │ -p disable-driver-mounts-904481                                                                                                                                                                                                               │ disable-driver-mounts-904481 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-380588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p old-k8s-version-380588 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-534842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p no-preload-534842 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-380588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p no-preload-534842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-046271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p embed-certs-046271 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-806420 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-046271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-806420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ old-k8s-version-380588 image list --format=json                                                                                                                                                                                               │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p old-k8s-version-380588 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ no-preload-534842 image list --format=json                                                                                                                                                                                                    │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p no-preload-534842 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                     │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:17:22
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:17:22.498316  617021 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:17:22.498682  617021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:22.498698  617021 out.go:374] Setting ErrFile to fd 2...
	I1202 16:17:22.498706  617021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:22.499020  617021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:17:22.499708  617021 out.go:368] Setting JSON to false
	I1202 16:17:22.501327  617021 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10783,"bootTime":1764681459,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:17:22.501399  617021 start.go:143] virtualization: kvm guest
	I1202 16:17:22.505282  617021 out.go:179] * [default-k8s-diff-port-806420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:17:22.506595  617021 notify.go:221] Checking for updates...
	I1202 16:17:22.506646  617021 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:17:22.507981  617021 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:17:22.509145  617021 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:22.510227  617021 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:17:22.511263  617021 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:17:22.512202  617021 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:17:22.513803  617021 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:22.514580  617021 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:17:22.546450  617021 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:17:22.546572  617021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:22.614629  617021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:17:22.602669456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:22.614775  617021 docker.go:319] overlay module found
	I1202 16:17:22.616372  617021 out.go:179] * Using the docker driver based on existing profile
	I1202 16:17:20.554206  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:17:20.554226  615191 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:17:20.554286  615191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-046271
	I1202 16:17:20.578798  615191 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:20.578835  615191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:17:20.578900  615191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-046271
	I1202 16:17:20.590547  615191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:17:20.597866  615191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:17:20.608006  615191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:17:20.696829  615191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:17:20.711938  615191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:17:20.715717  615191 node_ready.go:35] waiting up to 6m0s for node "embed-certs-046271" to be "Ready" ...
	I1202 16:17:20.724206  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:17:20.724236  615191 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:17:20.733876  615191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:20.741340  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:17:20.741367  615191 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:17:20.760344  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:17:20.760372  615191 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:17:20.777477  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:17:20.777507  615191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:17:20.794322  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:17:20.794352  615191 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:17:20.812771  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:17:20.812806  615191 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:17:20.827575  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:17:20.827606  615191 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:17:20.843608  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:17:20.843637  615191 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:17:20.858834  615191 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:17:20.858862  615191 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:17:20.877363  615191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:17:22.050597  615191 node_ready.go:49] node "embed-certs-046271" is "Ready"
	I1202 16:17:22.050643  615191 node_ready.go:38] duration metric: took 1.334887125s for node "embed-certs-046271" to be "Ready" ...
	I1202 16:17:22.050670  615191 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:17:22.050729  615191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:17:22.687464  615191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.975454995s)
	I1202 16:17:22.687522  615191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.953605693s)
	I1202 16:17:22.687655  615191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.810242956s)
	I1202 16:17:22.687712  615191 api_server.go:72] duration metric: took 2.165624029s to wait for apiserver process to appear ...
	I1202 16:17:22.617494  617021 start.go:309] selected driver: docker
	I1202 16:17:22.617510  617021 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:22.617607  617021 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:17:22.618289  617021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:22.687951  617021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 16:17:22.676818567 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:22.688331  617021 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:17:22.688378  617021 cni.go:84] Creating CNI manager for ""
	I1202 16:17:22.688459  617021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:17:22.688539  617021 start.go:353] cluster config:
	{Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:22.687737  615191 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:17:22.687841  615191 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 16:17:22.689323  615191 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-046271 addons enable metrics-server
	
	I1202 16:17:22.690518  617021 out.go:179] * Starting "default-k8s-diff-port-806420" primary control-plane node in "default-k8s-diff-port-806420" cluster
	I1202 16:17:22.691442  617021 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:17:22.692381  617021 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:17:22.696323  615191 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:22.696349  615191 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 16:17:22.701692  615191 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 16:17:22.693673  617021 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:17:22.693741  617021 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:17:22.693782  617021 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 16:17:22.693799  617021 cache.go:65] Caching tarball of preloaded images
	I1202 16:17:22.693901  617021 preload.go:238] Found /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 16:17:22.693915  617021 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 16:17:22.694040  617021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json ...
	I1202 16:17:22.717168  617021 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:17:22.717185  617021 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 16:17:22.717204  617021 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:17:22.717240  617021 start.go:360] acquireMachinesLock for default-k8s-diff-port-806420: {Name:mk8a961b68c6bbf9b1910f8ae43c90e49f86c0f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:22.717306  617021 start.go:364] duration metric: took 43.2µs to acquireMachinesLock for "default-k8s-diff-port-806420"
	I1202 16:17:22.717329  617021 start.go:96] Skipping create...Using existing machine configuration
	I1202 16:17:22.717337  617021 fix.go:54] fixHost starting: 
	I1202 16:17:22.717575  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:22.736168  617021 fix.go:112] recreateIfNeeded on default-k8s-diff-port-806420: state=Stopped err=<nil>
	W1202 16:17:22.736197  617021 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 16:17:22.702818  615191 addons.go:530] duration metric: took 2.180728191s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 16:17:23.187965  615191 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 16:17:23.202226  615191 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:23.202260  615191 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:19.307997  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:21.806201  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:20.509898  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	W1202 16:17:22.511187  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	W1202 16:17:25.009769  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	I1202 16:17:22.738049  617021 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-806420" ...
	I1202 16:17:22.738131  617021 cli_runner.go:164] Run: docker start default-k8s-diff-port-806420
	I1202 16:17:23.056389  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:23.080845  617021 kic.go:430] container "default-k8s-diff-port-806420" state is running.
	I1202 16:17:23.081352  617021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:17:23.104364  617021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/config.json ...
	I1202 16:17:23.104731  617021 machine.go:94] provisionDockerMachine start ...
	I1202 16:17:23.104810  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:23.132129  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:23.132593  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:23.132615  617021 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:17:23.133560  617021 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44490->127.0.0.1:33255: read: connection reset by peer
	I1202 16:17:26.278234  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-806420
	
	I1202 16:17:26.278279  617021 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-806420"
	I1202 16:17:26.278370  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.298722  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:26.298946  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:26.298961  617021 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-806420 && echo "default-k8s-diff-port-806420" | sudo tee /etc/hostname
	I1202 16:17:26.455925  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-806420
	
	I1202 16:17:26.456010  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.475742  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:26.476020  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:26.476041  617021 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-806420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-806420/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-806420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:17:26.621706  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
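The shell fragment sent over SSH above makes the 127.0.1.1 hostname entry idempotent: replace an existing 127.0.1.1 line if one is present, otherwise append one. A simplified Go sketch of the same idea (illustration only; the test runs the shell version on the node) could be:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so exactly one "127.0.1.1 <name>" line exists.
func ensureHostsEntry(hostsPath, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-806420"))
}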
	I1202 16:17:26.621744  617021 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:17:26.621776  617021 ubuntu.go:190] setting up certificates
	I1202 16:17:26.621791  617021 provision.go:84] configureAuth start
	I1202 16:17:26.621871  617021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:17:26.646855  617021 provision.go:143] copyHostCerts
	I1202 16:17:26.646932  617021 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:17:26.646949  617021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:17:26.647023  617021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:17:26.647146  617021 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:17:26.647160  617021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:17:26.647202  617021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:17:26.647293  617021 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:17:26.647305  617021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:17:26.647345  617021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:17:26.647443  617021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-806420 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-806420 localhost minikube]
	I1202 16:17:26.754337  617021 provision.go:177] copyRemoteCerts
	I1202 16:17:26.754415  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:17:26.754477  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.777385  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:26.893005  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1202 16:17:26.918128  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:17:26.944489  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:17:26.970311  617021 provision.go:87] duration metric: took 348.497825ms to configureAuth
	I1202 16:17:26.970349  617021 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:17:26.970597  617021 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:26.970740  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:26.995213  617021 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:26.995551  617021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1202 16:17:26.995581  617021 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:17:23.688681  615191 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 16:17:23.693093  615191 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1202 16:17:23.694079  615191 api_server.go:141] control plane version: v1.34.2
	I1202 16:17:23.694104  615191 api_server.go:131] duration metric: took 1.006283162s to wait for apiserver health ...
	I1202 16:17:23.694113  615191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:17:23.697817  615191 system_pods.go:59] 8 kube-system pods found
	I1202 16:17:23.697855  615191 system_pods.go:61] "coredns-66bc5c9577-f2vhx" [364e193c-f53a-4a43-b365-fe8364c3bd0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:23.697865  615191 system_pods.go:61] "etcd-embed-certs-046271" [5b715b6b-8154-4ca8-9dc1-795be52cb8b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:23.697876  615191 system_pods.go:61] "kindnet-wpj6k" [9249e8d2-e10c-4cae-bf04-cbf331109cf5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:23.697883  615191 system_pods.go:61] "kube-apiserver-embed-certs-046271" [f87f3619-f513-463f-bb69-acf168ec4ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:23.697892  615191 system_pods.go:61] "kube-controller-manager-embed-certs-046271" [bbdde76a-6098-496b-aaeb-2d61a714017a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:23.697899  615191 system_pods.go:61] "kube-proxy-q9pxb" [85574988-c836-4351-80bf-92683e782d91] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:23.697905  615191 system_pods.go:61] "kube-scheduler-embed-certs-046271" [d3b40c19-3363-443d-93f9-d2789b47d291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:23.697910  615191 system_pods.go:61] "storage-provisioner" [5a625bd8-b8b8-4abc-b86a-d39218c7ffe3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:23.697918  615191 system_pods.go:74] duration metric: took 3.801084ms to wait for pod list to return data ...
	I1202 16:17:23.697926  615191 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:17:23.700382  615191 default_sa.go:45] found service account: "default"
	I1202 16:17:23.700399  615191 default_sa.go:55] duration metric: took 2.466186ms for default service account to be created ...
	I1202 16:17:23.700407  615191 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:17:23.703139  615191 system_pods.go:86] 8 kube-system pods found
	I1202 16:17:23.703167  615191 system_pods.go:89] "coredns-66bc5c9577-f2vhx" [364e193c-f53a-4a43-b365-fe8364c3bd0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:23.703178  615191 system_pods.go:89] "etcd-embed-certs-046271" [5b715b6b-8154-4ca8-9dc1-795be52cb8b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:23.703189  615191 system_pods.go:89] "kindnet-wpj6k" [9249e8d2-e10c-4cae-bf04-cbf331109cf5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:23.703199  615191 system_pods.go:89] "kube-apiserver-embed-certs-046271" [f87f3619-f513-463f-bb69-acf168ec4ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:23.703214  615191 system_pods.go:89] "kube-controller-manager-embed-certs-046271" [bbdde76a-6098-496b-aaeb-2d61a714017a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:23.703227  615191 system_pods.go:89] "kube-proxy-q9pxb" [85574988-c836-4351-80bf-92683e782d91] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:23.703256  615191 system_pods.go:89] "kube-scheduler-embed-certs-046271" [d3b40c19-3363-443d-93f9-d2789b47d291] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:23.703268  615191 system_pods.go:89] "storage-provisioner" [5a625bd8-b8b8-4abc-b86a-d39218c7ffe3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:23.703278  615191 system_pods.go:126] duration metric: took 2.864031ms to wait for k8s-apps to be running ...
	I1202 16:17:23.703288  615191 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:17:23.703342  615191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:23.717127  615191 system_svc.go:56] duration metric: took 13.83377ms WaitForService to wait for kubelet
	I1202 16:17:23.717156  615191 kubeadm.go:587] duration metric: took 3.195075641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:17:23.717179  615191 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:17:23.720108  615191 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:17:23.720130  615191 node_conditions.go:123] node cpu capacity is 8
	I1202 16:17:23.720143  615191 node_conditions.go:105] duration metric: took 2.959591ms to run NodePressure ...
	I1202 16:17:23.720159  615191 start.go:242] waiting for startup goroutines ...
	I1202 16:17:23.720169  615191 start.go:247] waiting for cluster config update ...
	I1202 16:17:23.720186  615191 start.go:256] writing updated cluster config ...
	I1202 16:17:23.720469  615191 ssh_runner.go:195] Run: rm -f paused
	I1202 16:17:23.724393  615191 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:23.728063  615191 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f2vhx" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 16:17:25.734503  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:27.735550  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:23.807143  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:26.306569  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	W1202 16:17:28.307617  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	I1202 16:17:27.600994  617021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:17:27.601027  617021 machine.go:97] duration metric: took 4.496275002s to provisionDockerMachine
	I1202 16:17:27.601043  617021 start.go:293] postStartSetup for "default-k8s-diff-port-806420" (driver="docker")
	I1202 16:17:27.601058  617021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:17:27.601128  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:17:27.601178  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.623246  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:27.730663  617021 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:17:27.735877  617021 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:17:27.735907  617021 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:17:27.735918  617021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:17:27.735966  617021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:17:27.736035  617021 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:17:27.736120  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:17:27.745825  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:17:27.768713  617021 start.go:296] duration metric: took 167.65018ms for postStartSetup
	I1202 16:17:27.768803  617021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:17:27.768855  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.789992  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:27.900148  617021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:17:27.906371  617021 fix.go:56] duration metric: took 5.18902239s for fixHost
	I1202 16:17:27.906403  617021 start.go:83] releasing machines lock for "default-k8s-diff-port-806420", held for 5.189082645s
	I1202 16:17:27.906507  617021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-806420
	I1202 16:17:27.929346  617021 ssh_runner.go:195] Run: cat /version.json
	I1202 16:17:27.929406  617021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:17:27.929409  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.929492  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:27.952635  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:27.954515  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:28.138245  617021 ssh_runner.go:195] Run: systemctl --version
	I1202 16:17:28.147344  617021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:17:28.198225  617021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:17:28.204870  617021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:17:28.204948  617021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:17:28.216111  617021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 16:17:28.216139  617021 start.go:496] detecting cgroup driver to use...
	I1202 16:17:28.216177  617021 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:17:28.216233  617021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:17:28.236312  617021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:17:28.253597  617021 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:17:28.253663  617021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:17:28.274789  617021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:17:28.292789  617021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:17:28.400578  617021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:17:28.502622  617021 docker.go:234] disabling docker service ...
	I1202 16:17:28.502709  617021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:17:28.519863  617021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:17:28.534627  617021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:17:28.622884  617021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:17:28.715766  617021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:17:28.728514  617021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:17:28.743515  617021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:17:28.743589  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.752513  617021 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:17:28.752573  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.761803  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.770820  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.779678  617021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:17:28.788772  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.799817  617021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.812207  617021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:28.822959  617021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:17:28.830615  617021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:17:28.839315  617021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:28.935291  617021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 16:17:29.312918  617021 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:17:29.312980  617021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:17:29.316948  617021 start.go:564] Will wait 60s for crictl version
	I1202 16:17:29.316995  617021 ssh_runner.go:195] Run: which crictl
	I1202 16:17:29.320879  617021 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:17:29.346184  617021 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
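"Will wait 60s for socket path /var/run/crio/crio.sock" above is a simple existence poll after the crio restart, before crictl is queried. A sketch of that kind of wait, using a hypothetical waitForSocket helper, might be:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}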
	I1202 16:17:29.346247  617021 ssh_runner.go:195] Run: crio --version
	I1202 16:17:29.374009  617021 ssh_runner.go:195] Run: crio --version
	I1202 16:17:29.405802  617021 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	W1202 16:17:27.010483  607516 pod_ready.go:104] pod "coredns-5dd5756b68-fsfh2" is not "Ready", error: <nil>
	I1202 16:17:29.009809  607516 pod_ready.go:94] pod "coredns-5dd5756b68-fsfh2" is "Ready"
	I1202 16:17:29.009836  607516 pod_ready.go:86] duration metric: took 38.00631225s for pod "coredns-5dd5756b68-fsfh2" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.012870  607516 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.017277  607516 pod_ready.go:94] pod "etcd-old-k8s-version-380588" is "Ready"
	I1202 16:17:29.017298  607516 pod_ready.go:86] duration metric: took 4.40606ms for pod "etcd-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.019970  607516 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.023996  607516 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-380588" is "Ready"
	I1202 16:17:29.024017  607516 pod_ready.go:86] duration metric: took 4.027937ms for pod "kube-apiserver-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.026488  607516 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.207471  607516 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-380588" is "Ready"
	I1202 16:17:29.207497  607516 pod_ready.go:86] duration metric: took 180.991786ms for pod "kube-controller-manager-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.408298  607516 pod_ready.go:83] waiting for pod "kube-proxy-jqstm" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:29.809129  607516 pod_ready.go:94] pod "kube-proxy-jqstm" is "Ready"
	I1202 16:17:29.809162  607516 pod_ready.go:86] duration metric: took 400.836367ms for pod "kube-proxy-jqstm" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:30.009989  607516 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:30.408957  607516 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-380588" is "Ready"
	I1202 16:17:30.409044  607516 pod_ready.go:86] duration metric: took 399.025835ms for pod "kube-scheduler-old-k8s-version-380588" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:30.409070  607516 pod_ready.go:40] duration metric: took 39.411732547s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:30.482562  607516 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1202 16:17:30.484303  607516 out.go:203] 
	W1202 16:17:30.485747  607516 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1202 16:17:30.486932  607516 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1202 16:17:30.488134  607516 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-380588" cluster and "default" namespace by default
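The pod_ready waits interleaved above (processes 607516, 609654 and 615191) poll each kube-system pod until its Ready condition turns True or the pod is gone. A minimal client-go sketch of that check, assuming a kubeconfig on disk and a hypothetical isPodReady helper rather than minikube's own pod_ready.go, could be:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod currently has condition Ready=True.
func isPodReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumption: kubeconfig path; the test job writes one under its integration home.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22021-264555/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(cs, "kube-system", "coredns-5dd5756b68-fsfh2")
	fmt.Println(ready, err)
}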
	I1202 16:17:29.407098  617021 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-806420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:17:29.424770  617021 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 16:17:29.429550  617021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:17:29.439999  617021 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:17:29.440104  617021 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 16:17:29.440140  617021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:17:29.471019  617021 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:17:29.471045  617021 crio.go:433] Images already preloaded, skipping extraction
	I1202 16:17:29.471102  617021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:17:29.496542  617021 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:17:29.496569  617021 cache_images.go:86] Images are preloaded, skipping loading
	I1202 16:17:29.496578  617021 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1202 16:17:29.496701  617021 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-806420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:17:29.496786  617021 ssh_runner.go:195] Run: crio config
	I1202 16:17:29.541566  617021 cni.go:84] Creating CNI manager for ""
	I1202 16:17:29.541586  617021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:17:29.541596  617021 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 16:17:29.541616  617021 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-806420 NodeName:default-k8s-diff-port-806420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:17:29.541728  617021 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-806420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 16:17:29.541789  617021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 16:17:29.550029  617021 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 16:17:29.550090  617021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:17:29.558054  617021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1202 16:17:29.571441  617021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 16:17:29.584227  617021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
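The kubeadm.yaml.new written above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch for sanity-checking which kinds ended up in the generated file, using gopkg.in/yaml.v3 (an assumption for illustration, not part of minikube), might be:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each document carries apiVersion and kind, e.g. kubeadm.k8s.io/v1beta4 ClusterConfiguration.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}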
	I1202 16:17:29.597282  617021 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:17:29.601067  617021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:17:29.611632  617021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:29.694704  617021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:17:29.718170  617021 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420 for IP: 192.168.85.2
	I1202 16:17:29.718196  617021 certs.go:195] generating shared ca certs ...
	I1202 16:17:29.718216  617021 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:29.718396  617021 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:17:29.718471  617021 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:17:29.718486  617021 certs.go:257] generating profile certs ...
	I1202 16:17:29.718602  617021 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/client.key
	I1202 16:17:29.718693  617021 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key.20cb4091
	I1202 16:17:29.718752  617021 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.key
	I1202 16:17:29.718896  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:17:29.718940  617021 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:17:29.718953  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:17:29.718990  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:17:29.719023  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:17:29.719054  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:17:29.719109  617021 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:17:29.719924  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:17:29.741007  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:17:29.761350  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:17:29.780876  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:17:29.804308  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 16:17:29.825901  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 16:17:29.848908  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:17:29.867865  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/default-k8s-diff-port-806420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 16:17:29.888652  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:17:29.910779  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:17:29.932582  617021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:17:29.956561  617021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:17:29.972696  617021 ssh_runner.go:195] Run: openssl version
	I1202 16:17:29.980524  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:17:29.991411  617021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:17:29.996151  617021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:17:29.996212  617021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:17:30.050503  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:17:30.061483  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:17:30.072491  617021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:17:30.077665  617021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:17:30.077718  617021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:17:30.129682  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:17:30.140657  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:17:30.152273  617021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:17:30.157239  617021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:17:30.157304  617021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:17:30.211554  617021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:17:30.223094  617021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:17:30.228304  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:17:30.285622  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:17:30.343619  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:17:30.405618  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:17:30.470279  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:17:30.533815  617021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
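The openssl runs above do two things: "x509 -hash" computes the subject hash used to name the /etc/ssl/certs/<hash>.0 symlinks (b5213941.0 for minikubeCA.pem in this log), and "-checkend 86400" verifies each control-plane certificate remains valid for at least another day. A sketch that shells out to openssl the same way, with hypothetical subjectHash and checkCert helpers, might be:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the openssl subject hash used to name /etc/ssl/certs/<hash>.0 links.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	return strings.TrimSpace(string(out)), err
}

// checkCert reports whether the certificate stays valid for at least `seconds` more seconds.
func checkCert(certPath string, seconds int) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", fmt.Sprint(seconds))
	return cmd.Run() == nil // openssl exits 0 if the cert will not expire within the window
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(h, err) // per the log above this is the hash behind /etc/ssl/certs/b5213941.0
	fmt.Println(checkCert("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400))
}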
	I1202 16:17:30.599554  617021 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-806420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-806420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:30.599678  617021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:17:30.599735  617021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:17:30.654880  617021 cri.go:89] found id: "dd7adc25ca0d8fd13c03d582eb1846e44e7ca31363dd13737dfcd8541ae71f4a"
	I1202 16:17:30.654952  617021 cri.go:89] found id: "85a4f9f063a689e0c01b71338ce33ac27c1c4ef5a601031762f5f6f8468c7949"
	I1202 16:17:30.654958  617021 cri.go:89] found id: "fa204ce25b4b750a274bec528d833933338cbebe536dd59bd13e8ef6cec0cb00"
	I1202 16:17:30.654963  617021 cri.go:89] found id: "e986fe28a3e21e60cd56299b5d31eb8159c847908a86b5e9049cff20903959aa"
	I1202 16:17:30.654967  617021 cri.go:89] found id: ""
	I1202 16:17:30.655019  617021 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 16:17:30.673871  617021 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:17:30Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:17:30.673941  617021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:17:30.686769  617021 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:17:30.686797  617021 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:17:30.686844  617021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:17:30.699192  617021 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:17:30.701520  617021 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-806420" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:30.702957  617021 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-264555/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-806420" cluster setting kubeconfig missing "default-k8s-diff-port-806420" context setting]
	I1202 16:17:30.704478  617021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:30.707218  617021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:17:30.719927  617021 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 16:17:30.720026  617021 kubeadm.go:602] duration metric: took 33.222622ms to restartPrimaryControlPlane
	I1202 16:17:30.720048  617021 kubeadm.go:403] duration metric: took 120.509203ms to StartCluster
	I1202 16:17:30.720091  617021 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:30.720179  617021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:30.723308  617021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:30.723718  617021 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:17:30.724045  617021 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:30.724081  617021 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 16:17:30.724157  617021 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-806420"
	I1202 16:17:30.724174  617021 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-806420"
	W1202 16:17:30.724182  617021 addons.go:248] addon storage-provisioner should already be in state true
	I1202 16:17:30.724203  617021 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:17:30.724727  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.724888  617021 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-806420"
	I1202 16:17:30.724906  617021 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-806420"
	W1202 16:17:30.724915  617021 addons.go:248] addon dashboard should already be in state true
	I1202 16:17:30.724912  617021 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-806420"
	I1202 16:17:30.724939  617021 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-806420"
	I1202 16:17:30.724944  617021 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:17:30.725432  617021 out.go:179] * Verifying Kubernetes components...
	I1202 16:17:30.725507  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.725453  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.730554  617021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:30.764253  617021 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:17:30.765559  617021 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 16:17:30.765563  617021 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:17:30.765773  617021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 16:17:30.765913  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:30.771476  617021 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 16:17:30.772748  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:17:30.772772  617021 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:17:30.772833  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:30.774089  617021 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-806420"
	W1202 16:17:30.774153  617021 addons.go:248] addon default-storageclass should already be in state true
	I1202 16:17:30.774196  617021 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:17:30.774739  617021 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:17:30.805290  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:30.815719  617021 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:30.815744  617021 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:17:30.815803  617021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:17:30.818534  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:30.847757  617021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:17:30.983053  617021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:17:31.006025  617021 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-806420" to be "Ready" ...
	I1202 16:17:31.015709  617021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:17:31.044129  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:17:31.044161  617021 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:17:31.080007  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:17:31.080035  617021 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:17:31.089152  617021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:17:31.105968  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:17:31.105999  617021 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:17:31.125794  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:17:31.125819  617021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:17:31.146432  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:17:31.146461  617021 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:17:31.166977  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:17:31.167010  617021 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:17:31.185493  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:17:31.185536  617021 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:17:31.204002  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:17:31.204034  617021 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:17:31.223408  617021 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:17:31.223455  617021 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:17:31.243155  617021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
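	[editor note] At this point every dashboard manifest has been staged under /etc/kubernetes/addons and one kubectl apply is issued with the cluster-internal kubeconfig. A rough sketch of that apply step; note minikube actually runs this over SSH inside the node via ssh_runner, while the sketch below runs kubectl locally for simplicity. The kubectl path and manifest names come from the log, the helper itself is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests applies a set of staged addon manifests in a single
// kubectl invocation, using the cluster-internal kubeconfig, mirroring the
// apply command shown in the log above.
func applyAddonManifests(kubectl string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// remaining dashboard manifests as listed in the log
	}
	if err := applyAddonManifests("/var/lib/minikube/binaries/v1.34.2/kubectl", manifests); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```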
	W1202 16:17:30.312353  609654 pod_ready.go:104] pod "coredns-7d764666f9-fxl4s" is not "Ready", error: <nil>
	I1202 16:17:31.311117  609654 pod_ready.go:94] pod "coredns-7d764666f9-fxl4s" is "Ready"
	I1202 16:17:31.311148  609654 pod_ready.go:86] duration metric: took 32.51010024s for pod "coredns-7d764666f9-fxl4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.314691  609654 pod_ready.go:83] waiting for pod "etcd-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.321620  609654 pod_ready.go:94] pod "etcd-no-preload-534842" is "Ready"
	I1202 16:17:31.321651  609654 pod_ready.go:86] duration metric: took 6.872089ms for pod "etcd-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.324914  609654 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.330629  609654 pod_ready.go:94] pod "kube-apiserver-no-preload-534842" is "Ready"
	I1202 16:17:31.330663  609654 pod_ready.go:86] duration metric: took 5.720105ms for pod "kube-apiserver-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.333806  609654 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.505747  609654 pod_ready.go:94] pod "kube-controller-manager-no-preload-534842" is "Ready"
	I1202 16:17:31.505784  609654 pod_ready.go:86] duration metric: took 171.955168ms for pod "kube-controller-manager-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:31.705911  609654 pod_ready.go:83] waiting for pod "kube-proxy-xqnrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.105456  609654 pod_ready.go:94] pod "kube-proxy-xqnrx" is "Ready"
	I1202 16:17:32.105487  609654 pod_ready.go:86] duration metric: took 399.544466ms for pod "kube-proxy-xqnrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.306457  609654 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.705260  609654 pod_ready.go:94] pod "kube-scheduler-no-preload-534842" is "Ready"
	I1202 16:17:32.705298  609654 pod_ready.go:86] duration metric: took 398.794846ms for pod "kube-scheduler-no-preload-534842" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:17:32.705317  609654 pod_ready.go:40] duration metric: took 33.908136514s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:32.783728  609654 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 16:17:32.787599  609654 out.go:179] * Done! kubectl is now configured to use "no-preload-534842" cluster and "default" namespace by default
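	[editor note] The pod_ready.go lines above poll each kube-system pod until its Ready condition reports True (or the pod is gone), then record the elapsed time. A hedged client-go sketch of such a wait loop, assuming a reachable kubeconfig; the helper name, path, and timeout are illustrative and not minikube's code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its PodReady condition is True or the
// timeout elapses.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7d764666f9-fxl4s", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```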
	W1202 16:17:30.238599  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:32.744223  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	I1202 16:17:32.896889  617021 node_ready.go:49] node "default-k8s-diff-port-806420" is "Ready"
	I1202 16:17:32.896991  617021 node_ready.go:38] duration metric: took 1.890924168s for node "default-k8s-diff-port-806420" to be "Ready" ...
	I1202 16:17:32.897022  617021 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:17:32.897106  617021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:17:33.630628  617021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.614880216s)
	I1202 16:17:33.630702  617021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.541520167s)
	I1202 16:17:33.630841  617021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.387645959s)
	I1202 16:17:33.630867  617021 api_server.go:72] duration metric: took 2.907113913s to wait for apiserver process to appear ...
	I1202 16:17:33.630880  617021 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:17:33.630901  617021 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 16:17:33.633116  617021 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-806420 addons enable metrics-server
	
	I1202 16:17:33.635678  617021 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:17:33.635702  617021 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 16:17:33.639966  617021 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 16:17:33.641004  617021 addons.go:530] duration metric: took 2.916912715s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 16:17:34.131947  617021 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 16:17:34.137470  617021 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 16:17:34.138943  617021 api_server.go:141] control plane version: v1.34.2
	I1202 16:17:34.139019  617021 api_server.go:131] duration metric: took 508.129517ms to wait for apiserver health ...
	I1202 16:17:34.139043  617021 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:17:34.144346  617021 system_pods.go:59] 8 kube-system pods found
	I1202 16:17:34.144412  617021 system_pods.go:61] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:34.144438  617021 system_pods.go:61] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:34.144453  617021 system_pods.go:61] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:34.144461  617021 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:34.144472  617021 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:34.144482  617021 system_pods.go:61] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:34.144495  617021 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:34.144502  617021 system_pods.go:61] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:34.144515  617021 system_pods.go:74] duration metric: took 5.454658ms to wait for pod list to return data ...
	I1202 16:17:34.144526  617021 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:17:34.147568  617021 default_sa.go:45] found service account: "default"
	I1202 16:17:34.147593  617021 default_sa.go:55] duration metric: took 3.053699ms for default service account to be created ...
	I1202 16:17:34.147604  617021 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 16:17:34.151209  617021 system_pods.go:86] 8 kube-system pods found
	I1202 16:17:34.151246  617021 system_pods.go:89] "coredns-66bc5c9577-6h6nr" [7c832d8c-99dc-4663-a386-c48abaf9335e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 16:17:34.151258  617021 system_pods.go:89] "etcd-default-k8s-diff-port-806420" [e47c28bd-c4ac-417c-92e4-2ed52662c35b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:17:34.151270  617021 system_pods.go:89] "kindnet-pc8st" [17b96563-2832-47ee-9d04-8e27db1a3c6b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:17:34.151280  617021 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-806420" [44c28fe6-dea2-4f64-989d-d69480bc7988] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:17:34.151291  617021 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-806420" [6e6342da-debb-4021-8cb1-adec092a866a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:17:34.151299  617021 system_pods.go:89] "kube-proxy-574km" [3766b4e1-7e00-4229-99a3-9eec486a3437] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:17:34.151307  617021 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-806420" [14951142-9cb5-4cf8-a095-d45123ec49da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:17:34.151315  617021 system_pods.go:89] "storage-provisioner" [b3d4301c-a3b1-4c90-bb80-045b48b75011] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 16:17:34.151325  617021 system_pods.go:126] duration metric: took 3.713746ms to wait for k8s-apps to be running ...
	I1202 16:17:34.151335  617021 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 16:17:34.151394  617021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:17:34.170938  617021 system_svc.go:56] duration metric: took 19.587588ms WaitForService to wait for kubelet
	I1202 16:17:34.170990  617021 kubeadm.go:587] duration metric: took 3.447228899s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 16:17:34.171017  617021 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:17:34.176230  617021 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:17:34.176264  617021 node_conditions.go:123] node cpu capacity is 8
	I1202 16:17:34.176284  617021 node_conditions.go:105] duration metric: took 5.260608ms to run NodePressure ...
	I1202 16:17:34.176300  617021 start.go:242] waiting for startup goroutines ...
	I1202 16:17:34.176309  617021 start.go:247] waiting for cluster config update ...
	I1202 16:17:34.176324  617021 start.go:256] writing updated cluster config ...
	I1202 16:17:34.176722  617021 ssh_runner.go:195] Run: rm -f paused
	I1202 16:17:34.181758  617021 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:17:34.185626  617021 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6h6nr" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 16:17:36.191101  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:35.233349  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:37.234098  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:38.191695  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:40.691815  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:39.234621  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:41.734966  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:42.693258  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:45.191908  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:47.192345  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:44.233558  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:46.234006  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
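	[editor note] Before the per-component dumps below, note the apiserver health gating visible above: api_server.go polls https://192.168.85.2:8444/healthz, tolerates the initial 500 while post-start hooks (rbac bootstrap-roles, system priority classes) finish, and proceeds once it sees 200. A minimal polling sketch, assuming /healthz is reachable anonymously; TLS verification is skipped here for brevity, whereas minikube authenticates with the cluster's client certificates:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz retries GET on an apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout elapses.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := pollHealthz("https://192.168.85.2:8444/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
```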
	
	
	==> CRI-O <==
	Dec 02 16:17:09 no-preload-534842 crio[562]: time="2025-12-02T16:17:09.434185977Z" level=info msg="Started container" PID=1745 containerID=ff03d8794fed2985d00df11059baabf61951e5d8c86cf6e9f5ad7e6e8760bd6d description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper id=3c151391-a41d-4af3-9cf2-a836416ba487 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e43a64505959da73cd8225cb013cd04884475f35991d0d2d47d20d06ab67328e
	Dec 02 16:17:10 no-preload-534842 crio[562]: time="2025-12-02T16:17:10.374771815Z" level=info msg="Removing container: ad458d08c2f22d674445230559d2036ccc3122e74daf745f83fd436c3110a701" id=cbdd74ec-9304-41da-ba61-ca54bbd90ffa name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:10 no-preload-534842 crio[562]: time="2025-12-02T16:17:10.385578602Z" level=info msg="Removed container ad458d08c2f22d674445230559d2036ccc3122e74daf745f83fd436c3110a701: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=cbdd74ec-9304-41da-ba61-ca54bbd90ffa name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.294128225Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6e5bb814-92e9-446f-8eef-c6c54cd09088 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.29736234Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a7704cf6-1cc3-4527-bfe4-bc5f46c61ae7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.300721718Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=4bf66704-04d1-4bba-bd12-ce8c37830af0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.300869531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.309769346Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.31043557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.337170001Z" level=info msg="Created container 3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=4bf66704-04d1-4bba-bd12-ce8c37830af0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.338756276Z" level=info msg="Starting container: 3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc" id=6995bc99-a0e4-461d-a4ca-3a783e27cc32 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.341525933Z" level=info msg="Started container" PID=1756 containerID=3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper id=6995bc99-a0e4-461d-a4ca-3a783e27cc32 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e43a64505959da73cd8225cb013cd04884475f35991d0d2d47d20d06ab67328e
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.404969976Z" level=info msg="Removing container: ff03d8794fed2985d00df11059baabf61951e5d8c86cf6e9f5ad7e6e8760bd6d" id=fbec879d-bca7-4367-84ea-be7a60007f83 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:20 no-preload-534842 crio[562]: time="2025-12-02T16:17:20.418031673Z" level=info msg="Removed container ff03d8794fed2985d00df11059baabf61951e5d8c86cf6e9f5ad7e6e8760bd6d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=fbec879d-bca7-4367-84ea-be7a60007f83 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.293819471Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e78b4706-3d7f-463e-8854-bdd3120f035d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.294856865Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2c278448-3ab4-4bad-a462-0d80cb683ae6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.29607642Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=ef5e5fed-0a9d-444d-8440-6fce7024709f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.296225514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.302525022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.302974074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.346689592Z" level=info msg="Created container 678df9e701579d2c9bec8e97da27418a40aef0f064af7456732c5cf2c76aafa6: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=ef5e5fed-0a9d-444d-8440-6fce7024709f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.347438753Z" level=info msg="Starting container: 678df9e701579d2c9bec8e97da27418a40aef0f064af7456732c5cf2c76aafa6" id=970afce4-f53f-4e46-8668-aa89e615cb7f name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.349379317Z" level=info msg="Started container" PID=1788 containerID=678df9e701579d2c9bec8e97da27418a40aef0f064af7456732c5cf2c76aafa6 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper id=970afce4-f53f-4e46-8668-aa89e615cb7f name=/runtime.v1.RuntimeService/StartContainer sandboxID=e43a64505959da73cd8225cb013cd04884475f35991d0d2d47d20d06ab67328e
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.469316308Z" level=info msg="Removing container: 3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc" id=c70c00b8-4616-4d07-bf15-e9e3affd14c6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:44 no-preload-534842 crio[562]: time="2025-12-02T16:17:44.479349023Z" level=info msg="Removed container 3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6/dashboard-metrics-scraper" id=c70c00b8-4616-4d07-bf15-e9e3affd14c6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	678df9e701579       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   e43a64505959d       dashboard-metrics-scraper-867fb5f87b-nvld6   kubernetes-dashboard
	4acc4581c2377       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   cfcaf6339fd8f       kubernetes-dashboard-b84665fb8-6hz4c         kubernetes-dashboard
	784e9d9349278       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Running             storage-provisioner         1                   6736aa3ef6085       storage-provisioner                          kube-system
	cf43888cedff5       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           52 seconds ago      Running             coredns                     0                   06aff50ff5234       coredns-7d764666f9-fxl4s                     kube-system
	ac9caed37c1bb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   560edc2d221d0       busybox                                      default
	d11384487d38d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   6736aa3ef6085       storage-provisioner                          kube-system
	4c8eb7538dccf       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   0de2248931a42       kindnet-fn84j                                kube-system
	ce137b34f41fe       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           52 seconds ago      Running             kube-proxy                  0                   4d936749ce9ef       kube-proxy-xqnrx                             kube-system
	ef4d71f3dba7f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   69d12d8b1c5cc       etcd-no-preload-534842                       kube-system
	7f5c2cae2aa29       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           54 seconds ago      Running             kube-apiserver              0                   66f0e30c31f25       kube-apiserver-no-preload-534842             kube-system
	44a6ec8649ccb       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           54 seconds ago      Running             kube-controller-manager     0                   f4428ac81d74e       kube-controller-manager-no-preload-534842    kube-system
	ec6d57760ee61       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           54 seconds ago      Running             kube-scheduler              0                   a10070b9967b6       kube-scheduler-no-preload-534842             kube-system
	
	
	==> coredns [cf43888cedff5c122573841043f9faaa886459652a505ba34085fc2cdb3a7d64] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54621 - 17492 "HINFO IN 7032501483970489343.8395598587500903268. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020672712s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-534842
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-534842
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=no-preload-534842
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_15_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:15:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-534842
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:17:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:17:27 +0000   Tue, 02 Dec 2025 16:15:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:17:27 +0000   Tue, 02 Dec 2025 16:15:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:17:27 +0000   Tue, 02 Dec 2025 16:15:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:17:27 +0000   Tue, 02 Dec 2025 16:16:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-534842
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                08e82a9a-8bf2-46c3-bfb2-1095025d0bbb
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-fxl4s                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-534842                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-fn84j                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-534842              250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-534842     200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-xqnrx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-534842              100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-nvld6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-6hz4c          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node no-preload-534842 event: Registered Node no-preload-534842 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node no-preload-534842 event: Registered Node no-preload-534842 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [ef4d71f3dba7f249c2dccfb9492705acceca27d92b988ad3f3be8ddf967a2524] <==
	{"level":"warn","ts":"2025-12-02T16:16:56.491476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.497910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.504457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.513757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.520218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.526592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.533632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.540362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.552535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.558754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.565475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.577633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.585898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.592949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.599678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.607195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.615093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.622691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.629937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.637201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.644914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.651519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.672749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.680175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:16:56.687378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52662","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 16:17:50 up  3:00,  0 user,  load average: 4.47, 4.19, 2.73
	Linux no-preload-534842 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c8eb7538dccf291c0dade54352e7e1daff8f787ed7c19748a63f7a9d724cc04] <==
	I1202 16:16:57.803390       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 16:16:57.895919       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1202 16:16:57.896108       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:16:57.896126       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:16:57.896144       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:16:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:16:58.099008       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:16:58.099511       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:16:58.099572       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:16:58.099744       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 16:16:58.700340       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:16:58.700365       1 metrics.go:72] Registering metrics
	I1202 16:16:58.700405       1 controller.go:711] "Syncing nftables rules"
	I1202 16:17:08.099860       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 16:17:08.099916       1 main.go:301] handling current node
	I1202 16:17:18.100498       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 16:17:18.100561       1 main.go:301] handling current node
	I1202 16:17:28.099630       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 16:17:28.099682       1 main.go:301] handling current node
	I1202 16:17:38.105496       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 16:17:38.105533       1 main.go:301] handling current node
	I1202 16:17:48.101515       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 16:17:48.101565       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7f5c2cae2aa291edcbbe0f927b622ca7853d0323468ef1d4662a47fc47dab2a7] <==
	I1202 16:16:57.227852       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 16:16:57.227471       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 16:16:57.227881       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 16:16:57.227925       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:57.228029       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1202 16:16:57.228060       1 aggregator.go:187] initial CRD sync complete...
	I1202 16:16:57.228068       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 16:16:57.228072       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 16:16:57.228075       1 cache.go:39] Caches are synced for autoregister controller
	I1202 16:16:57.228258       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 16:16:57.228480       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:57.233116       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1202 16:16:57.234796       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 16:16:57.247308       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:16:57.261412       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:16:57.517936       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:16:57.548596       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:16:57.576701       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:16:57.586524       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:16:57.659143       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.151.234"}
	I1202 16:16:57.678669       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.152.43"}
	I1202 16:16:58.130910       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 16:17:00.796001       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 16:17:00.896502       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:17:00.998791       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [44a6ec8649ccbb15298488aba888279a5c30ed43f97b8e65953b50f4199a5f54] <==
	I1202 16:17:00.351915       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352060       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352256       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352338       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352372       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352496       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1202 16:17:00.352582       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-534842"
	I1202 16:17:00.352606       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352739       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1202 16:17:00.352844       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352859       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.353101       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352879       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.352895       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.353287       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.355816       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.355861       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.356184       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.356733       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:17:00.357020       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.361819       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.457140       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.457163       1 shared_informer.go:377] "Caches are synced"
	I1202 16:17:00.457178       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 16:17:00.457185       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [ce137b34f41fe8fd3b9b895d8913ee21b506dd0abb93c65e3d35f67ee4dbad78] <==
	I1202 16:16:57.702526       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:16:57.771756       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:16:57.872824       1 shared_informer.go:377] "Caches are synced"
	I1202 16:16:57.872867       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1202 16:16:57.872979       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:16:57.892224       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:16:57.892275       1 server_linux.go:136] "Using iptables Proxier"
	I1202 16:16:57.897414       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:16:57.897830       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 16:16:57.897894       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:16:57.899412       1 config.go:200] "Starting service config controller"
	I1202 16:16:57.899654       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:16:57.899491       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:16:57.899685       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:16:57.899491       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:16:57.899698       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:16:57.899861       1 config.go:309] "Starting node config controller"
	I1202 16:16:57.899961       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:16:57.899988       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:16:57.999855       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 16:16:57.999858       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 16:16:57.999891       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ec6d57760ee61c8da2007c23b76750466cdaa245ef7a003ac8ccc74510f7bd2e] <==
	I1202 16:16:56.202453       1 serving.go:386] Generated self-signed cert in-memory
	W1202 16:16:57.153937       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 16:16:57.153975       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 16:16:57.153988       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 16:16:57.153997       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 16:16:57.183747       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 16:16:57.183794       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:16:57.186712       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:16:57.186764       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:16:57.186942       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 16:16:57.187061       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 16:16:57.287132       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 16:17:15 no-preload-534842 kubelet[714]: E1202 16:17:15.387261     714 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-534842" containerName="etcd"
	Dec 02 16:17:15 no-preload-534842 kubelet[714]: E1202 16:17:15.387390     714 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-534842" containerName="kube-scheduler"
	Dec 02 16:17:18 no-preload-534842 kubelet[714]: E1202 16:17:18.318200     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" containerName="dashboard-metrics-scraper"
	Dec 02 16:17:18 no-preload-534842 kubelet[714]: I1202 16:17:18.318237     714 scope.go:122] "RemoveContainer" containerID="ff03d8794fed2985d00df11059baabf61951e5d8c86cf6e9f5ad7e6e8760bd6d"
	Dec 02 16:17:18 no-preload-534842 kubelet[714]: E1202 16:17:18.318397     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nvld6_kubernetes-dashboard(a4a63e16-a516-47d2-8bee-ed321517b392)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" podUID="a4a63e16-a516-47d2-8bee-ed321517b392"
	Dec 02 16:17:20 no-preload-534842 kubelet[714]: E1202 16:17:20.293333     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" containerName="dashboard-metrics-scraper"
	Dec 02 16:17:20 no-preload-534842 kubelet[714]: I1202 16:17:20.293385     714 scope.go:122] "RemoveContainer" containerID="ff03d8794fed2985d00df11059baabf61951e5d8c86cf6e9f5ad7e6e8760bd6d"
	Dec 02 16:17:20 no-preload-534842 kubelet[714]: I1202 16:17:20.402343     714 scope.go:122] "RemoveContainer" containerID="ff03d8794fed2985d00df11059baabf61951e5d8c86cf6e9f5ad7e6e8760bd6d"
	Dec 02 16:17:20 no-preload-534842 kubelet[714]: E1202 16:17:20.402480     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" containerName="dashboard-metrics-scraper"
	Dec 02 16:17:20 no-preload-534842 kubelet[714]: I1202 16:17:20.402502     714 scope.go:122] "RemoveContainer" containerID="3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc"
	Dec 02 16:17:20 no-preload-534842 kubelet[714]: E1202 16:17:20.402708     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nvld6_kubernetes-dashboard(a4a63e16-a516-47d2-8bee-ed321517b392)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" podUID="a4a63e16-a516-47d2-8bee-ed321517b392"
	Dec 02 16:17:28 no-preload-534842 kubelet[714]: E1202 16:17:28.317669     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" containerName="dashboard-metrics-scraper"
	Dec 02 16:17:28 no-preload-534842 kubelet[714]: I1202 16:17:28.317721     714 scope.go:122] "RemoveContainer" containerID="3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc"
	Dec 02 16:17:28 no-preload-534842 kubelet[714]: E1202 16:17:28.317967     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nvld6_kubernetes-dashboard(a4a63e16-a516-47d2-8bee-ed321517b392)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" podUID="a4a63e16-a516-47d2-8bee-ed321517b392"
	Dec 02 16:17:30 no-preload-534842 kubelet[714]: E1202 16:17:30.863329     714 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-fxl4s" containerName="coredns"
	Dec 02 16:17:44 no-preload-534842 kubelet[714]: E1202 16:17:44.293185     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" containerName="dashboard-metrics-scraper"
	Dec 02 16:17:44 no-preload-534842 kubelet[714]: I1202 16:17:44.293225     714 scope.go:122] "RemoveContainer" containerID="3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc"
	Dec 02 16:17:44 no-preload-534842 kubelet[714]: I1202 16:17:44.467904     714 scope.go:122] "RemoveContainer" containerID="3b9195698fbf49b96486aaf6a6ca745c7778e0865f5cd999c2b76324299e3afc"
	Dec 02 16:17:44 no-preload-534842 kubelet[714]: E1202 16:17:44.468110     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" containerName="dashboard-metrics-scraper"
	Dec 02 16:17:44 no-preload-534842 kubelet[714]: I1202 16:17:44.468147     714 scope.go:122] "RemoveContainer" containerID="678df9e701579d2c9bec8e97da27418a40aef0f064af7456732c5cf2c76aafa6"
	Dec 02 16:17:44 no-preload-534842 kubelet[714]: E1202 16:17:44.468346     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nvld6_kubernetes-dashboard(a4a63e16-a516-47d2-8bee-ed321517b392)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nvld6" podUID="a4a63e16-a516-47d2-8bee-ed321517b392"
	Dec 02 16:17:45 no-preload-534842 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 16:17:45 no-preload-534842 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 16:17:45 no-preload-534842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 16:17:45 no-preload-534842 systemd[1]: kubelet.service: Consumed 1.712s CPU time.
	
	
	==> kubernetes-dashboard [4acc4581c23774d9b9ae826d1cebbf7a4ab0f3eb613cad13a717ce4d3ceb6947] <==
	2025/12/02 16:17:05 Using namespace: kubernetes-dashboard
	2025/12/02 16:17:05 Using in-cluster config to connect to apiserver
	2025/12/02 16:17:05 Using secret token for csrf signing
	2025/12/02 16:17:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 16:17:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 16:17:05 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/02 16:17:05 Generating JWE encryption key
	2025/12/02 16:17:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 16:17:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 16:17:06 Initializing JWE encryption key from synchronized object
	2025/12/02 16:17:06 Creating in-cluster Sidecar client
	2025/12/02 16:17:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:06 Serving insecurely on HTTP port: 9090
	2025/12/02 16:17:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:05 Starting overwatch
	
	
	==> storage-provisioner [784e9d934927898b20c9e43c22133906438a1575abb416ae016ebfe0b2444f19] <==
	W1202 16:17:25.836945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:27.841229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:27.847895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:29.852191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:29.856592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:31.862417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:31.869846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:33.873541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:33.877533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:35.881062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:35.884684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:37.888301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:37.894825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:39.898969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:39.905695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:41.910051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:41.914717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:43.917682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:43.921807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:45.925508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:45.929697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:47.933364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:47.938314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:49.946658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:49.963589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d11384487d38dcb6fc74940486755eb9bd08fc8a3d4b5841e9a6d5f50afe8f69] <==
	I1202 16:16:57.665541       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 16:16:57.667804       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-534842 -n no-preload-534842
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-534842 -n no-preload-534842: exit status 2 (372.27897ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-534842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.57s)
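The repeated storage-provisioner warnings in the post-mortem logs above are client-go relaying the API server's deprecation notice for v1 Endpoints; the notice itself names the replacement, discovery.k8s.io/v1 EndpointSlice. As a hedged illustration only (not the provisioner's actual code; the in-cluster config and the kube-system namespace are assumptions), a minimal client-go sketch that lists EndpointSlices instead of Endpoints:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config, as the in-cluster components in the logs above use.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List discovery.k8s.io/v1 EndpointSlices instead of the deprecated v1 Endpoints.
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}

Moving consumers to EndpointSlice is what silences this class of warning; the report itself does not show which call site in the provisioner triggers it.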

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-046271 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-046271 --alsologtostderr -v=1: exit status 80 (2.409063017s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-046271 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 16:18:13.999865  628450 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:18:14.000252  628450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:14.000267  628450 out.go:374] Setting ErrFile to fd 2...
	I1202 16:18:14.000276  628450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:14.000649  628450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:18:14.000962  628450 out.go:368] Setting JSON to false
	I1202 16:18:14.000989  628450 mustload.go:66] Loading cluster: embed-certs-046271
	I1202 16:18:14.001467  628450 config.go:182] Loaded profile config "embed-certs-046271": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:18:14.002101  628450 cli_runner.go:164] Run: docker container inspect embed-certs-046271 --format={{.State.Status}}
	I1202 16:18:14.026618  628450 host.go:66] Checking if "embed-certs-046271" exists ...
	I1202 16:18:14.026988  628450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:18:14.112229  628450 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-02 16:18:14.100832072 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:18:14.113003  628450 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-046271 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 16:18:14.114810  628450 out.go:179] * Pausing node embed-certs-046271 ... 
	I1202 16:18:14.115872  628450 host.go:66] Checking if "embed-certs-046271" exists ...
	I1202 16:18:14.116118  628450 ssh_runner.go:195] Run: systemctl --version
	I1202 16:18:14.116156  628450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-046271
	I1202 16:18:14.136910  628450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/embed-certs-046271/id_rsa Username:docker}
	I1202 16:18:14.241574  628450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:14.256580  628450 pause.go:52] kubelet running: true
	I1202 16:18:14.256660  628450 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:18:14.441209  628450 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:18:14.441307  628450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:18:14.521613  628450 cri.go:89] found id: "173719ad1715f37da7c13d636a2be3e7910afc81ad6d92a88ab2d5268e0b4ad0"
	I1202 16:18:14.521642  628450 cri.go:89] found id: "7a01cce632ce473b9f13d211e46baeb90d009c8a9471ff0c1ed098c62fef035b"
	I1202 16:18:14.521649  628450 cri.go:89] found id: "d5c5bd19a797757b7b8b4e7b9bd03cab31a65007d9e44dc873e73b592003f935"
	I1202 16:18:14.521654  628450 cri.go:89] found id: "378f281936523058e29cebe67cdf6b667293a6726f2577c5653947947d6210ab"
	I1202 16:18:14.521659  628450 cri.go:89] found id: "b7bbe4338eaa713a9f46532f0d5f4f8fdd4e7eb320af43e5d146a44067c124a7"
	I1202 16:18:14.521664  628450 cri.go:89] found id: "c2276216c1487f93e3277d2422350dd969b6a1c3c3470ca0ad9cf54e25deb70f"
	I1202 16:18:14.521668  628450 cri.go:89] found id: "16ad18068e5f3c997cc9fd8d07b82668917afab1c7be18e0282d7eaaa341d8c1"
	I1202 16:18:14.521672  628450 cri.go:89] found id: "3bf3111e9436304c788bb6ef52a85daf72acb7556f1bd1e4dbd20f1c48b40884"
	I1202 16:18:14.521676  628450 cri.go:89] found id: "698ef956828ff2ca307684a986b76f4d7810277a835e5153b0a6cfc108ff4852"
	I1202 16:18:14.521687  628450 cri.go:89] found id: "286ad93e33bb4ff3d193a144de9658655d06b742be65e682ec9fa7e8d5f3a8f4"
	I1202 16:18:14.521696  628450 cri.go:89] found id: "4e3969029638e659ef754301ede73fb0488601bf95a80b6e49bd77a95e8d801f"
	I1202 16:18:14.521700  628450 cri.go:89] found id: ""
	I1202 16:18:14.521749  628450 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:18:14.534991  628450 retry.go:31] will retry after 133.799671ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:14Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:18:14.669517  628450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:14.683209  628450 pause.go:52] kubelet running: false
	I1202 16:18:14.683357  628450 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:18:14.821371  628450 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:18:14.821500  628450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:18:14.888585  628450 cri.go:89] found id: "173719ad1715f37da7c13d636a2be3e7910afc81ad6d92a88ab2d5268e0b4ad0"
	I1202 16:18:14.888607  628450 cri.go:89] found id: "7a01cce632ce473b9f13d211e46baeb90d009c8a9471ff0c1ed098c62fef035b"
	I1202 16:18:14.888612  628450 cri.go:89] found id: "d5c5bd19a797757b7b8b4e7b9bd03cab31a65007d9e44dc873e73b592003f935"
	I1202 16:18:14.888616  628450 cri.go:89] found id: "378f281936523058e29cebe67cdf6b667293a6726f2577c5653947947d6210ab"
	I1202 16:18:14.888619  628450 cri.go:89] found id: "b7bbe4338eaa713a9f46532f0d5f4f8fdd4e7eb320af43e5d146a44067c124a7"
	I1202 16:18:14.888622  628450 cri.go:89] found id: "c2276216c1487f93e3277d2422350dd969b6a1c3c3470ca0ad9cf54e25deb70f"
	I1202 16:18:14.888625  628450 cri.go:89] found id: "16ad18068e5f3c997cc9fd8d07b82668917afab1c7be18e0282d7eaaa341d8c1"
	I1202 16:18:14.888628  628450 cri.go:89] found id: "3bf3111e9436304c788bb6ef52a85daf72acb7556f1bd1e4dbd20f1c48b40884"
	I1202 16:18:14.888631  628450 cri.go:89] found id: "698ef956828ff2ca307684a986b76f4d7810277a835e5153b0a6cfc108ff4852"
	I1202 16:18:14.888637  628450 cri.go:89] found id: "286ad93e33bb4ff3d193a144de9658655d06b742be65e682ec9fa7e8d5f3a8f4"
	I1202 16:18:14.888646  628450 cri.go:89] found id: "4e3969029638e659ef754301ede73fb0488601bf95a80b6e49bd77a95e8d801f"
	I1202 16:18:14.888649  628450 cri.go:89] found id: ""
	I1202 16:18:14.888691  628450 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:18:14.901088  628450 retry.go:31] will retry after 416.179747ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:14Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:18:15.317714  628450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:15.330614  628450 pause.go:52] kubelet running: false
	I1202 16:18:15.330698  628450 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:18:15.481324  628450 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:18:15.481403  628450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:18:15.555205  628450 cri.go:89] found id: "173719ad1715f37da7c13d636a2be3e7910afc81ad6d92a88ab2d5268e0b4ad0"
	I1202 16:18:15.555228  628450 cri.go:89] found id: "7a01cce632ce473b9f13d211e46baeb90d009c8a9471ff0c1ed098c62fef035b"
	I1202 16:18:15.555232  628450 cri.go:89] found id: "d5c5bd19a797757b7b8b4e7b9bd03cab31a65007d9e44dc873e73b592003f935"
	I1202 16:18:15.555236  628450 cri.go:89] found id: "378f281936523058e29cebe67cdf6b667293a6726f2577c5653947947d6210ab"
	I1202 16:18:15.555239  628450 cri.go:89] found id: "b7bbe4338eaa713a9f46532f0d5f4f8fdd4e7eb320af43e5d146a44067c124a7"
	I1202 16:18:15.555242  628450 cri.go:89] found id: "c2276216c1487f93e3277d2422350dd969b6a1c3c3470ca0ad9cf54e25deb70f"
	I1202 16:18:15.555245  628450 cri.go:89] found id: "16ad18068e5f3c997cc9fd8d07b82668917afab1c7be18e0282d7eaaa341d8c1"
	I1202 16:18:15.555248  628450 cri.go:89] found id: "3bf3111e9436304c788bb6ef52a85daf72acb7556f1bd1e4dbd20f1c48b40884"
	I1202 16:18:15.555251  628450 cri.go:89] found id: "698ef956828ff2ca307684a986b76f4d7810277a835e5153b0a6cfc108ff4852"
	I1202 16:18:15.555270  628450 cri.go:89] found id: "286ad93e33bb4ff3d193a144de9658655d06b742be65e682ec9fa7e8d5f3a8f4"
	I1202 16:18:15.555276  628450 cri.go:89] found id: "4e3969029638e659ef754301ede73fb0488601bf95a80b6e49bd77a95e8d801f"
	I1202 16:18:15.555279  628450 cri.go:89] found id: ""
	I1202 16:18:15.555335  628450 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:18:15.570169  628450 retry.go:31] will retry after 519.257364ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:15Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:18:16.089899  628450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:16.103052  628450 pause.go:52] kubelet running: false
	I1202 16:18:16.103106  628450 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:18:16.243824  628450 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:18:16.243905  628450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:18:16.311739  628450 cri.go:89] found id: "173719ad1715f37da7c13d636a2be3e7910afc81ad6d92a88ab2d5268e0b4ad0"
	I1202 16:18:16.311767  628450 cri.go:89] found id: "7a01cce632ce473b9f13d211e46baeb90d009c8a9471ff0c1ed098c62fef035b"
	I1202 16:18:16.311773  628450 cri.go:89] found id: "d5c5bd19a797757b7b8b4e7b9bd03cab31a65007d9e44dc873e73b592003f935"
	I1202 16:18:16.311778  628450 cri.go:89] found id: "378f281936523058e29cebe67cdf6b667293a6726f2577c5653947947d6210ab"
	I1202 16:18:16.311783  628450 cri.go:89] found id: "b7bbe4338eaa713a9f46532f0d5f4f8fdd4e7eb320af43e5d146a44067c124a7"
	I1202 16:18:16.311788  628450 cri.go:89] found id: "c2276216c1487f93e3277d2422350dd969b6a1c3c3470ca0ad9cf54e25deb70f"
	I1202 16:18:16.311793  628450 cri.go:89] found id: "16ad18068e5f3c997cc9fd8d07b82668917afab1c7be18e0282d7eaaa341d8c1"
	I1202 16:18:16.311798  628450 cri.go:89] found id: "3bf3111e9436304c788bb6ef52a85daf72acb7556f1bd1e4dbd20f1c48b40884"
	I1202 16:18:16.311802  628450 cri.go:89] found id: "698ef956828ff2ca307684a986b76f4d7810277a835e5153b0a6cfc108ff4852"
	I1202 16:18:16.311815  628450 cri.go:89] found id: "286ad93e33bb4ff3d193a144de9658655d06b742be65e682ec9fa7e8d5f3a8f4"
	I1202 16:18:16.311824  628450 cri.go:89] found id: "4e3969029638e659ef754301ede73fb0488601bf95a80b6e49bd77a95e8d801f"
	I1202 16:18:16.311829  628450 cri.go:89] found id: ""
	I1202 16:18:16.311876  628450 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:18:16.325940  628450 out.go:203] 
	W1202 16:18:16.327206  628450 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 16:18:16.327222  628450 out.go:285] * 
	* 
	W1202 16:18:16.331496  628450 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 16:18:16.332746  628450 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-046271 --alsologtostderr -v=1 failed: exit status 80
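The exit status 80 above is produced at the end of the sequence visible in the stderr trace: pause checks and disables the kubelet over SSH, lists CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces with crictl, then repeatedly runs `sudo runc list -f json` and gives up when /run/runc is missing on this crio node. A minimal local sketch of that final retry loop (plain os/exec instead of minikube's SSH runner; the delays are illustrative, not the values retry.go chose above):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRunning mirrors the failing step in the trace above: it shells out to
	// `sudo runc list -f json`. On this node the command exits non-zero because
	// /run/runc does not exist, which is what surfaces as GUEST_PAUSE.
	func listRunning() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}

	func main() {
		// Illustrative backoff; minikube's retry helper picks its own delays.
		delays := []time.Duration{150 * time.Millisecond, 400 * time.Millisecond, 500 * time.Millisecond}
		for i, d := range delays {
			out, err := listRunning()
			if err == nil {
				fmt.Printf("runc reported: %s\n", out)
				return
			}
			fmt.Printf("attempt %d failed: %v; retrying after %v\n", i+1, err, d)
			time.Sleep(d)
		}
		fmt.Println("giving up: could not list running containers")
	}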
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-046271
helpers_test.go:243: (dbg) docker inspect embed-certs-046271:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310",
	        "Created": "2025-12-02T16:16:07.197943832Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 615383,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:17:13.655002956Z",
	            "FinishedAt": "2025-12-02T16:17:11.235361652Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310/hostname",
	        "HostsPath": "/var/lib/docker/containers/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310/hosts",
	        "LogPath": "/var/lib/docker/containers/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310-json.log",
	        "Name": "/embed-certs-046271",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-046271:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-046271",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310",
	                "LowerDir": "/var/lib/docker/overlay2/12561fe812efc0a7100c7e89c65c08692ffbc64b594cbfd37abbc22239f7f12c-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12561fe812efc0a7100c7e89c65c08692ffbc64b594cbfd37abbc22239f7f12c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12561fe812efc0a7100c7e89c65c08692ffbc64b594cbfd37abbc22239f7f12c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12561fe812efc0a7100c7e89c65c08692ffbc64b594cbfd37abbc22239f7f12c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-046271",
	                "Source": "/var/lib/docker/volumes/embed-certs-046271/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-046271",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-046271",
	                "name.minikube.sigs.k8s.io": "embed-certs-046271",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bd9aaae1bf41337c86ebcb0add389c81ee2e208ffc07ac37f577647496cd92ce",
	            "SandboxKey": "/var/run/docker/netns/bd9aaae1bf41",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33250"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33251"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33254"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33252"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33253"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-046271": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f242ea03e26ef86f9adac97f285055eeae57f7a447eb51a12604c316daba1ca0",
	                    "EndpointID": "1815b42361b99c183011d80cd98a51a0e7a8e723b177c92e069c4f2e724dfbb0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "86:08:0a:24:63:8a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-046271",
	                        "c27056350058"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
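The SSH port 33250 used by the pause attempt in the stderr trace corresponds to the 22/tcp HostPort in the inspect output above. A small sketch (sshHostPort is a hypothetical helper name; it omits the extra single quotes minikube wraps around the template) showing how that port can be read with the same docker container inspect format string the cli_runner line used:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort runs the Go-template format string seen in the cli_runner line
	// of the stderr trace to pull the host port mapped to the container's 22/tcp.
	func sshHostPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("embed-certs-046271")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", port) // 33250 in the inspect output above
	}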
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-046271 -n embed-certs-046271
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-046271 -n embed-certs-046271: exit status 2 (407.672692ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-046271 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-046271 logs -n 25: (1.163942768s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p no-preload-534842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p no-preload-534842 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-380588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p no-preload-534842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-046271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p embed-certs-046271 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-806420 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-046271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-806420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ old-k8s-version-380588 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p old-k8s-version-380588 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ no-preload-534842 image list --format=json                                                                                                                                                                                                           │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p no-preload-534842 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ image   │ embed-certs-046271 image list --format=json                                                                                                                                                                                                          │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p embed-certs-046271 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:17:52
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:17:52.644799  624315 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:17:52.644911  624315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:52.644919  624315 out.go:374] Setting ErrFile to fd 2...
	I1202 16:17:52.644923  624315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:52.645119  624315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:17:52.645660  624315 out.go:368] Setting JSON to false
	I1202 16:17:52.646996  624315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10814,"bootTime":1764681459,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:17:52.647061  624315 start.go:143] virtualization: kvm guest
	I1202 16:17:52.649119  624315 out.go:179] * [newest-cni-682353] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:17:52.650307  624315 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:17:52.650341  624315 notify.go:221] Checking for updates...
	I1202 16:17:52.652574  624315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:17:52.653891  624315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:52.655069  624315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:17:52.656462  624315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:17:52.658069  624315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:17:52.659797  624315 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:52.659881  624315 config.go:182] Loaded profile config "embed-certs-046271": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:52.659969  624315 config.go:182] Loaded profile config "no-preload-534842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:17:52.660075  624315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:17:52.686164  624315 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:17:52.686294  624315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:52.756277  624315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-02 16:17:52.744065656 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:52.756384  624315 docker.go:319] overlay module found
	I1202 16:17:52.760989  624315 out.go:179] * Using the docker driver based on user configuration
	I1202 16:17:52.762385  624315 start.go:309] selected driver: docker
	I1202 16:17:52.762402  624315 start.go:927] validating driver "docker" against <nil>
	I1202 16:17:52.762413  624315 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:17:52.762997  624315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:52.838284  624315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-02 16:17:52.827987453 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:52.838514  624315 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1202 16:17:52.838550  624315 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1202 16:17:52.838830  624315 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 16:17:52.844583  624315 out.go:179] * Using Docker driver with root privileges
	I1202 16:17:52.845732  624315 cni.go:84] Creating CNI manager for ""
	I1202 16:17:52.845789  624315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:17:52.845805  624315 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 16:17:52.845898  624315 start.go:353] cluster config:
	{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:52.847341  624315 out.go:179] * Starting "newest-cni-682353" primary control-plane node in "newest-cni-682353" cluster
	I1202 16:17:52.848449  624315 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:17:52.850119  624315 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	W1202 16:17:48.735446  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:51.235172  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	I1202 16:17:52.851463  624315 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:17:52.851567  624315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:17:52.877300  624315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:17:52.877327  624315 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 16:17:53.534100  624315 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1202 16:17:53.777545  624315 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1202 16:17:53.777737  624315 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json ...
	I1202 16:17:53.777776  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json: {Name:mk1da2fd97ec61d8b0621ec4e77abec4e577dd62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:53.777919  624315 cache.go:107] acquiring lock: {Name:mk6b8eeb5270fa67a5a87f892f37de1ae4805f75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.777969  624315 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:17:53.777962  624315 cache.go:107] acquiring lock: {Name:mk3f4d40fdf359ce0573637a386f14c0a310cdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778008  624315 start.go:360] acquireMachinesLock for newest-cni-682353: {Name:mkfed8f02380af59f92aa0b6f8ae02a29dbe0c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.777991  624315 cache.go:107] acquiring lock: {Name:mka2aa325920dfb2720f9036278856e8dac95446 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778031  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 16:17:53.778042  624315 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 144.394µs
	I1202 16:17:53.778050  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 16:17:53.778058  624315 start.go:364] duration metric: took 37.681µs to acquireMachinesLock for "newest-cni-682353"
	I1202 16:17:53.778060  624315 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 16:17:53.778062  624315 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 116.303µs
	I1202 16:17:53.778072  624315 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 16:17:53.778078  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 16:17:53.778080  624315 cache.go:107] acquiring lock: {Name:mkce5d795e0ca01a9ee3d674d001cd6e04bbbfba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778090  624315 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 120.36µs
	I1202 16:17:53.778124  624315 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 16:17:53.778077  624315 start.go:93] Provisioning new machine with config: &{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:17:53.778170  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 16:17:53.778176  624315 start.go:125] createHost starting for "" (driver="docker")
	I1202 16:17:53.778182  624315 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 105.339µs
	I1202 16:17:53.778196  624315 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 16:17:53.778173  624315 cache.go:107] acquiring lock: {Name:mk91bc91bcc535b3edd8200bf0c06e4d97781487 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778225  624315 cache.go:107] acquiring lock: {Name:mk17b77bf762047097cbe060b18dc85ae78a9727 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778251  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 16:17:53.778262  624315 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 102.828µs
	I1202 16:17:53.778276  624315 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 16:17:53.778262  624315 cache.go:107] acquiring lock: {Name:mkec45cdfdbdafc0ef1296b9d77662a50add1cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778298  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 16:17:53.778312  624315 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 111.218µs
	I1202 16:17:53.778316  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 16:17:53.778322  624315 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 16:17:53.778315  624315 cache.go:107] acquiring lock: {Name:mk821cef64e8468a2739d03d2e1019ac980bf2cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778326  624315 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 75.311µs
	W1202 16:17:53.692416  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:56.191595  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	I1202 16:17:53.778371  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 16:17:53.778382  624315 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 75.582µs
	I1202 16:17:53.778390  624315 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 16:17:53.778338  624315 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 16:17:53.778410  624315 cache.go:87] Successfully saved all images to host disk.
	I1202 16:17:53.788193  624315 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 16:17:53.788477  624315 start.go:159] libmachine.API.Create for "newest-cni-682353" (driver="docker")
	I1202 16:17:53.788525  624315 client.go:173] LocalClient.Create starting
	I1202 16:17:53.788618  624315 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem
	I1202 16:17:53.788655  624315 main.go:143] libmachine: Decoding PEM data...
	I1202 16:17:53.788673  624315 main.go:143] libmachine: Parsing certificate...
	I1202 16:17:53.788729  624315 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem
	I1202 16:17:53.788748  624315 main.go:143] libmachine: Decoding PEM data...
	I1202 16:17:53.788759  624315 main.go:143] libmachine: Parsing certificate...
	I1202 16:17:53.789143  624315 cli_runner.go:164] Run: docker network inspect newest-cni-682353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 16:17:53.809974  624315 cli_runner.go:211] docker network inspect newest-cni-682353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 16:17:53.810064  624315 network_create.go:284] running [docker network inspect newest-cni-682353] to gather additional debugging logs...
	I1202 16:17:53.810092  624315 cli_runner.go:164] Run: docker network inspect newest-cni-682353
	W1202 16:17:53.830774  624315 cli_runner.go:211] docker network inspect newest-cni-682353 returned with exit code 1
	I1202 16:17:53.830803  624315 network_create.go:287] error running [docker network inspect newest-cni-682353]: docker network inspect newest-cni-682353: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-682353 not found
	I1202 16:17:53.830820  624315 network_create.go:289] output of [docker network inspect newest-cni-682353]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-682353 not found
	
	** /stderr **
	I1202 16:17:53.830928  624315 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:17:53.851114  624315 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-59c4d474daec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:20:cf:7a:79:c5} reservation:<nil>}
	I1202 16:17:53.851815  624315 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-208582b1a4af IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:5b:fe:2d:46:75} reservation:<nil>}
	I1202 16:17:53.852643  624315 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-230a00bd70ce IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:8f:10:7f:8e:d3} reservation:<nil>}
	I1202 16:17:53.853252  624315 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f242ea03e26e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:4d:9d:95:a5:56} reservation:<nil>}
	I1202 16:17:53.853946  624315 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-71c0f0496cc5 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:fa:9c:49:d2:0f:a1} reservation:<nil>}
	I1202 16:17:53.854357  624315 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-26f54f8ab80d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:22:17:9c:97:61:b0} reservation:<nil>}
	I1202 16:17:53.854995  624315 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0027bef70}
	I1202 16:17:53.855021  624315 network_create.go:124] attempt to create docker network newest-cni-682353 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 16:17:53.855077  624315 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-682353 newest-cni-682353
	I1202 16:17:53.916078  624315 network_create.go:108] docker network newest-cni-682353 192.168.103.0/24 created
	I1202 16:17:53.916113  624315 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-682353" container
	I1202 16:17:53.916193  624315 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 16:17:53.938300  624315 cli_runner.go:164] Run: docker volume create newest-cni-682353 --label name.minikube.sigs.k8s.io=newest-cni-682353 --label created_by.minikube.sigs.k8s.io=true
	I1202 16:17:53.959487  624315 oci.go:103] Successfully created a docker volume newest-cni-682353
	I1202 16:17:53.959565  624315 cli_runner.go:164] Run: docker run --rm --name newest-cni-682353-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-682353 --entrypoint /usr/bin/test -v newest-cni-682353:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 16:17:54.413877  624315 oci.go:107] Successfully prepared a docker volume newest-cni-682353
	I1202 16:17:54.413932  624315 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1202 16:17:54.414025  624315 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 16:17:54.414062  624315 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 16:17:54.414111  624315 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 16:17:54.479852  624315 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-682353 --name newest-cni-682353 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-682353 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-682353 --network newest-cni-682353 --ip 192.168.103.2 --volume newest-cni-682353:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 16:17:54.770265  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Running}}
	I1202 16:17:54.793036  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:17:54.812286  624315 cli_runner.go:164] Run: docker exec newest-cni-682353 stat /var/lib/dpkg/alternatives/iptables
	I1202 16:17:54.862730  624315 oci.go:144] the created container "newest-cni-682353" has a running status.
	I1202 16:17:54.862790  624315 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa...
	I1202 16:17:55.048010  624315 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 16:17:55.089312  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:17:55.111220  624315 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 16:17:55.111254  624315 kic_runner.go:114] Args: [docker exec --privileged newest-cni-682353 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 16:17:55.162548  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:17:55.185203  624315 machine.go:94] provisionDockerMachine start ...
	I1202 16:17:55.185292  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:55.206996  624315 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:55.207255  624315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33260 <nil> <nil>}
	I1202 16:17:55.207290  624315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:17:55.350585  624315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-682353
	
	I1202 16:17:55.350623  624315 ubuntu.go:182] provisioning hostname "newest-cni-682353"
	I1202 16:17:55.350718  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:55.371361  624315 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:55.371720  624315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33260 <nil> <nil>}
	I1202 16:17:55.371746  624315 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-682353 && echo "newest-cni-682353" | sudo tee /etc/hostname
	I1202 16:17:55.527645  624315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-682353
	
	I1202 16:17:55.527735  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:55.546178  624315 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:55.546465  624315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33260 <nil> <nil>}
	I1202 16:17:55.546490  624315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-682353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-682353/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-682353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:17:55.688466  624315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:17:55.688504  624315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:17:55.688529  624315 ubuntu.go:190] setting up certificates
	I1202 16:17:55.688543  624315 provision.go:84] configureAuth start
	I1202 16:17:55.688607  624315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:17:55.707288  624315 provision.go:143] copyHostCerts
	I1202 16:17:55.707359  624315 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:17:55.707372  624315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:17:55.707483  624315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:17:55.707608  624315 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:17:55.707622  624315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:17:55.707663  624315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:17:55.707741  624315 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:17:55.707750  624315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:17:55.707784  624315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:17:55.707854  624315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.newest-cni-682353 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-682353]
	I1202 16:17:55.874758  624315 provision.go:177] copyRemoteCerts
	I1202 16:17:55.874827  624315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:17:55.874887  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:55.893962  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:55.995114  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:17:56.016141  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 16:17:56.034903  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:17:56.053997  624315 provision.go:87] duration metric: took 365.435069ms to configureAuth
	I1202 16:17:56.054029  624315 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:17:56.054223  624315 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:17:56.054345  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.073989  624315 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:56.074282  624315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33260 <nil> <nil>}
	I1202 16:17:56.074316  624315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:17:56.371749  624315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:17:56.371776  624315 machine.go:97] duration metric: took 1.186549675s to provisionDockerMachine
	I1202 16:17:56.371787  624315 client.go:176] duration metric: took 2.583251832s to LocalClient.Create
	I1202 16:17:56.371809  624315 start.go:167] duration metric: took 2.58333403s to libmachine.API.Create "newest-cni-682353"
	I1202 16:17:56.371819  624315 start.go:293] postStartSetup for "newest-cni-682353" (driver="docker")
	I1202 16:17:56.371833  624315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:17:56.371892  624315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:17:56.371933  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.393135  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:56.496995  624315 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:17:56.501038  624315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:17:56.501074  624315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:17:56.501087  624315 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:17:56.501151  624315 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:17:56.501258  624315 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:17:56.501378  624315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:17:56.509784  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:17:56.531088  624315 start.go:296] duration metric: took 159.252805ms for postStartSetup
	I1202 16:17:56.531578  624315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:17:56.551457  624315 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json ...
	I1202 16:17:56.551748  624315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:17:56.551795  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.569785  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:56.667711  624315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:17:56.672371  624315 start.go:128] duration metric: took 2.894159147s to createHost
	I1202 16:17:56.672398  624315 start.go:83] releasing machines lock for "newest-cni-682353", held for 2.894331813s
	I1202 16:17:56.672480  624315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:17:56.691026  624315 ssh_runner.go:195] Run: cat /version.json
	I1202 16:17:56.691068  624315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:17:56.691085  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.691142  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.709964  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:56.710241  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:56.863891  624315 ssh_runner.go:195] Run: systemctl --version
	I1202 16:17:56.870804  624315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:17:56.905199  624315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:17:56.910177  624315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:17:56.910251  624315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:17:56.939092  624315 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 16:17:56.939115  624315 start.go:496] detecting cgroup driver to use...
	I1202 16:17:56.939145  624315 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:17:56.939191  624315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:17:56.955610  624315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:17:56.969366  624315 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:17:56.969454  624315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:17:56.986554  624315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:17:57.004854  624315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:17:57.093269  624315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:17:57.182306  624315 docker.go:234] disabling docker service ...
	I1202 16:17:57.182377  624315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:17:57.203634  624315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:17:57.217007  624315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:17:57.302606  624315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:17:57.390234  624315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:17:57.403746  624315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:17:57.418993  624315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:17:57.419043  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.429606  624315 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:17:57.429677  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.440021  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.449681  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.459146  624315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:17:57.467927  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.477178  624315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.491151  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.501347  624315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:17:57.509339  624315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:17:57.517116  624315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:57.601755  624315 ssh_runner.go:195] Run: sudo systemctl restart crio
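The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted: they pin pause_image to registry.k8s.io/pause:3.10.1 and set cgroup_manager to "systemd". A minimal Go sketch of those two substitutions, assuming the drop-in file already contains both keys (the applyCrioOverrides helper and the local file path are illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// applyCrioOverrides replaces the pause_image and cgroup_manager lines of a
// CRI-O drop-in, the same edits the sed -i commands in the log perform.
func applyCrioOverrides(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Operate on a local copy; the real file lives under /etc/crio/crio.conf.d/.
	if err := applyCrioOverrides("02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}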
	I1202 16:17:58.048791  624315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:17:58.048863  624315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:17:58.053242  624315 start.go:564] Will wait 60s for crictl version
	I1202 16:17:58.053300  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.057049  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:17:58.081825  624315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 16:17:58.081931  624315 ssh_runner.go:195] Run: crio --version
	I1202 16:17:58.110886  624315 ssh_runner.go:195] Run: crio --version
	I1202 16:17:58.141240  624315 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 16:17:58.142295  624315 cli_runner.go:164] Run: docker network inspect newest-cni-682353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:17:58.161244  624315 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 16:17:58.165842  624315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:17:58.178580  624315 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1202 16:17:53.734533  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:56.234121  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:58.734313  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	I1202 16:18:00.733749  615191 pod_ready.go:94] pod "coredns-66bc5c9577-f2vhx" is "Ready"
	I1202 16:18:00.733779  615191 pod_ready.go:86] duration metric: took 37.005697217s for pod "coredns-66bc5c9577-f2vhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.736453  615191 pod_ready.go:83] waiting for pod "etcd-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.740534  615191 pod_ready.go:94] pod "etcd-embed-certs-046271" is "Ready"
	I1202 16:18:00.740565  615191 pod_ready.go:86] duration metric: took 4.089173ms for pod "etcd-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.742727  615191 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.746692  615191 pod_ready.go:94] pod "kube-apiserver-embed-certs-046271" is "Ready"
	I1202 16:18:00.746764  615191 pod_ready.go:86] duration metric: took 4.006581ms for pod "kube-apiserver-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.748680  615191 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.933918  615191 pod_ready.go:94] pod "kube-controller-manager-embed-certs-046271" is "Ready"
	I1202 16:18:00.933953  615191 pod_ready.go:86] duration metric: took 185.250717ms for pod "kube-controller-manager-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:01.132237  615191 pod_ready.go:83] waiting for pod "kube-proxy-q9pxb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:01.532378  615191 pod_ready.go:94] pod "kube-proxy-q9pxb" is "Ready"
	I1202 16:18:01.532413  615191 pod_ready.go:86] duration metric: took 400.148403ms for pod "kube-proxy-q9pxb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:01.732266  615191 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:02.132067  615191 pod_ready.go:94] pod "kube-scheduler-embed-certs-046271" is "Ready"
	I1202 16:18:02.132099  615191 pod_ready.go:86] duration metric: took 399.802212ms for pod "kube-scheduler-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:02.132116  615191 pod_ready.go:40] duration metric: took 38.407682171s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:18:02.187882  615191 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 16:18:02.190701  615191 out.go:179] * Done! kubectl is now configured to use "embed-certs-046271" cluster and "default" namespace by default
	W1202 16:17:58.192329  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:18:00.691537  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	I1202 16:17:58.179767  624315 kubeadm.go:884] updating cluster {Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:17:58.179901  624315 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:17:58.179958  624315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:17:58.206197  624315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 16:17:58.206227  624315 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 16:17:58.206292  624315 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:17:58.206301  624315 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.206310  624315 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.206335  624315 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.206332  624315 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.206358  624315 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.206378  624315 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.206345  624315 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 16:17:58.207688  624315 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 16:17:58.207691  624315 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.207691  624315 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.207693  624315 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.207691  624315 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.207689  624315 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:17:58.207798  624315 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.207691  624315 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.389649  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.398497  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.416062  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.417018  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.427952  624315 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1202 16:17:58.428006  624315 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.428060  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.428438  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1202 16:17:58.437511  624315 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1202 16:17:58.437565  624315 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.437615  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.460811  624315 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1202 16:17:58.460830  624315 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1202 16:17:58.460863  624315 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.460867  624315 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.460871  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.460903  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.460905  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.468480  624315 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1202 16:17:58.468534  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.468536  624315 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 16:17:58.468689  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.482843  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.490077  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.490149  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.490175  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.494233  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.502739  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.502866  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 16:17:58.534397  624315 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1202 16:17:58.534456  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.534461  624315 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.534509  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.534571  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.534725  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.543764  624315 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1202 16:17:58.543814  624315 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.543823  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.543866  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.546006  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 16:17:58.568541  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 16:17:58.568608  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.568652  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 16:17:58.568944  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.569405  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.598889  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1202 16:17:58.598908  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 16:17:58.598916  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 16:17:58.598908  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 16:17:58.599016  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1202 16:17:58.598911  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.598914  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.598995  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 16:17:58.598996  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 16:17:58.605101  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 16:17:58.605197  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 16:17:58.660277  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 16:17:58.660294  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1202 16:17:58.660307  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.660308  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1202 16:17:58.660293  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 16:17:58.660346  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1202 16:17:58.660376  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.660397  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 16:17:58.677906  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 16:17:58.677939  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1202 16:17:58.732749  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 16:17:58.732792  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 16:17:58.732793  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1202 16:17:58.732887  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.732894  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 16:17:58.789503  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 16:17:58.789541  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1202 16:17:58.813684  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 16:17:58.813789  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 16:17:58.820547  624315 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 16:17:58.820618  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1202 16:17:58.860692  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 16:17:58.860733  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1202 16:17:59.246230  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1202 16:17:59.246281  624315 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 16:17:59.246353  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 16:17:59.486290  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:00.367652  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.121268658s)
	I1202 16:18:00.367707  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 16:18:00.367746  624315 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 16:18:00.367761  624315 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1202 16:18:00.367803  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1202 16:18:00.367802  624315 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:00.367932  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:18:01.667535  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.299704769s)
	I1202 16:18:01.667566  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 16:18:01.667582  624315 ssh_runner.go:235] Completed: which crictl: (1.299628705s)
	I1202 16:18:01.667642  624315 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 16:18:01.667689  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1202 16:18:01.667644  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1202 16:18:03.191574  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:18:05.191913  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:18:07.192587  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	I1202 16:18:02.923984  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.256270074s)
	I1202 16:18:02.924014  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 16:18:02.924039  624315 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 16:18:02.924074  624315 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.256318055s)
	I1202 16:18:02.924088  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 16:18:02.924122  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:04.317999  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.393888467s)
	I1202 16:18:04.318026  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 16:18:04.318041  624315 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.393890267s)
	I1202 16:18:04.318057  624315 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 16:18:04.318112  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:04.318114  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 16:18:04.344156  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 16:18:04.344265  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 16:18:05.433824  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.115591929s)
	I1202 16:18:05.433855  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 16:18:05.433878  624315 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.089585497s)
	I1202 16:18:05.433893  624315 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 16:18:05.433913  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 16:18:05.433937  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1202 16:18:05.433969  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 16:18:06.583432  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.149420095s)
	I1202 16:18:06.583469  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 16:18:06.583499  624315 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 16:18:06.583549  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1202 16:18:07.120218  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 16:18:07.120270  624315 cache_images.go:125] Successfully loaded all cached images
	I1202 16:18:07.120277  624315 cache_images.go:94] duration metric: took 8.914034697s to LoadCachedImages
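The LoadCachedImages block above follows one pattern per image: inspect the runtime store, remove any stale tag, copy the cached tarball into /var/lib/minikube/images if it is missing, then load it with podman. A simplified, hypothetical sketch of that per-image loop (the ensureImage helper is invented for the example; it assumes it runs directly on the node with root, uses plain command names instead of the full /usr/local/bin paths, and collapses the error handling seen in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureImage loads one cached image tarball into the node's image store if
// the image is not already present, following the inspect / rmi / load
// sequence in the log (podman and CRI-O share image storage in this setup).
func ensureImage(image, tarball string) error {
	// Already present? Then there is nothing to transfer.
	if exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
		return nil
	}
	// Drop any stale tag; ignore failures, the image may simply be absent.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	// Load the cached tarball that was copied to the node.
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := ensureImage("registry.k8s.io/pause:3.10.1", "/var/lib/minikube/images/pause_3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}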
	I1202 16:18:07.120288  624315 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1202 16:18:07.120393  624315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-682353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:18:07.120482  624315 ssh_runner.go:195] Run: crio config
	I1202 16:18:07.166513  624315 cni.go:84] Creating CNI manager for ""
	I1202 16:18:07.166534  624315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:18:07.166549  624315 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 16:18:07.166572  624315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-682353 NodeName:newest-cni-682353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:18:07.166713  624315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-682353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
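The kubeadm config printed above is rendered from the cluster options logged earlier (node name, node IP, pod CIDR, Kubernetes version). As a toy illustration of that kind of rendering, here is a short text/template sketch; the struct, its field names, and the trimmed manifest are invented for the example and are not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the handful of values substituted into the manifest.
type clusterParams struct {
	NodeName string
	NodeIP   string
	PodCIDR  string
	K8sVer   string
}

const manifest = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVer}}
networking:
  podSubnet: "{{.PodCIDR}}"
`

func main() {
	p := clusterParams{
		NodeName: "newest-cni-682353",
		NodeIP:   "192.168.103.2",
		PodCIDR:  "10.42.0.0/16",
		K8sVer:   "v1.35.0-beta.0",
	}
	// Print to stdout; the real flow writes /var/tmp/minikube/kubeadm.yaml.new on the node.
	tmpl := template.Must(template.New("kubeadm").Parse(manifest))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		os.Exit(1)
	}
}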
	I1202 16:18:07.166783  624315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 16:18:07.175146  624315 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 16:18:07.175204  624315 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 16:18:07.183178  624315 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1202 16:18:07.183195  624315 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1202 16:18:07.183241  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 16:18:07.183244  624315 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1202 16:18:07.183286  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 16:18:07.183302  624315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:07.188024  624315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 16:18:07.188056  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1202 16:18:07.201415  624315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 16:18:07.201452  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1202 16:18:07.201516  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 16:18:07.221248  624315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 16:18:07.221288  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1202 16:18:07.716552  624315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:18:07.724534  624315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1202 16:18:07.737551  624315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 16:18:07.778722  624315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 16:18:07.792804  624315 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:18:07.796626  624315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
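The grep and bash pipeline above keep /etc/hosts idempotent: any existing control-plane.minikube.internal entry is filtered out and a single fresh entry is appended. A small Go sketch of the same idea, written against a local copy of the file (the ensureHostsEntry helper name is made up for the example):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts file so that exactly one line maps
// hostname to ip, mirroring the grep -v / append pipeline in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any previous entry for this name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts", "192.168.103.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}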
	I1202 16:18:07.838782  624315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:18:07.929263  624315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:18:07.956172  624315 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353 for IP: 192.168.103.2
	I1202 16:18:07.956193  624315 certs.go:195] generating shared ca certs ...
	I1202 16:18:07.956208  624315 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:07.956374  624315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:18:07.956413  624315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:18:07.956436  624315 certs.go:257] generating profile certs ...
	I1202 16:18:07.956496  624315 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.key
	I1202 16:18:07.956510  624315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.crt with IP's: []
	I1202 16:18:08.055915  624315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.crt ...
	I1202 16:18:08.055950  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.crt: {Name:mkbae4e216b534e22a7a22b5211ba0f085fa0a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.056133  624315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.key ...
	I1202 16:18:08.056145  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.key: {Name:mk01dd2149dcd5f6287686ae6bf7579abf16ae6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.056231  624315 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0
	I1202 16:18:08.056247  624315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt.5833a0e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1202 16:18:08.454875  624315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt.5833a0e0 ...
	I1202 16:18:08.454909  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt.5833a0e0: {Name:mk34e2dbb313339f9326d6e80e3c7620a9f90d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.455091  624315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0 ...
	I1202 16:18:08.455107  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0: {Name:mk521f77ecbe6526d4308034abb99ca52329446f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.455185  624315 certs.go:382] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt.5833a0e0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt
	I1202 16:18:08.455260  624315 certs.go:386] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key
	I1202 16:18:08.455314  624315 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key
	I1202 16:18:08.455328  624315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt with IP's: []
	I1202 16:18:08.725997  624315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt ...
	I1202 16:18:08.726029  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt: {Name:mk2542633dc1eea73aaea75c9b720c86ebeab857 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.726243  624315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key ...
	I1202 16:18:08.726261  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key: {Name:mkd92fd9f3993b30fa9a53ce61ae93d417dab751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
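The certs.go lines above generate the profile certificates: a client cert, an apiserver serving cert whose IP SANs are [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2], and a proxy-client cert, all signed by the shared minikubeCA. The following is only a shape-of-the-step sketch using the standard crypto/x509 package with a throwaway ECDSA CA; names and templates are illustrative, and minikube's own helpers differ in key type and file handling:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the same IP SANs as the apiserver cert in the log.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}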
	I1202 16:18:08.726487  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:18:08.726533  624315 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:18:08.726543  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:18:08.726568  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:18:08.726598  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:18:08.726621  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:18:08.726661  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:18:08.727246  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:18:08.746871  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:18:08.766275  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:18:08.786826  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:18:08.805357  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 16:18:08.823135  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 16:18:08.840900  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:18:08.858454  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 16:18:08.876051  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:18:08.896345  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:18:08.914496  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:18:08.933005  624315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:18:08.946072  624315 ssh_runner.go:195] Run: openssl version
	I1202 16:18:08.952558  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:18:08.961641  624315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:18:08.965532  624315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:18:08.965592  624315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:18:09.000522  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:18:09.010236  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:18:09.019164  624315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:09.023057  624315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:09.023101  624315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:09.058108  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:18:09.067191  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:18:09.075994  624315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:18:09.079911  624315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:18:09.079961  624315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:18:09.114442  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:18:09.123256  624315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:18:09.127070  624315 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 16:18:09.127130  624315 kubeadm.go:401] StartCluster: {Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:09.127204  624315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:18:09.127247  624315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:18:09.154342  624315 cri.go:89] found id: ""
	I1202 16:18:09.154431  624315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:18:09.162826  624315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 16:18:09.170565  624315 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 16:18:09.170625  624315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 16:18:09.178276  624315 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 16:18:09.178296  624315 kubeadm.go:158] found existing configuration files:
	
	I1202 16:18:09.178342  624315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 16:18:09.185869  624315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 16:18:09.185937  624315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 16:18:09.194222  624315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 16:18:09.202327  624315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 16:18:09.202390  624315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 16:18:09.210067  624315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 16:18:09.217901  624315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 16:18:09.217973  624315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 16:18:09.225309  624315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 16:18:09.233081  624315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 16:18:09.233139  624315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 16:18:09.240540  624315 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 16:18:09.277009  624315 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 16:18:09.277090  624315 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 16:18:09.343291  624315 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 16:18:09.343358  624315 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 16:18:09.343404  624315 kubeadm.go:319] OS: Linux
	I1202 16:18:09.343489  624315 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 16:18:09.343580  624315 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 16:18:09.343628  624315 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 16:18:09.343723  624315 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 16:18:09.343803  624315 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 16:18:09.343870  624315 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 16:18:09.343928  624315 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 16:18:09.343987  624315 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 16:18:09.413832  624315 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 16:18:09.414018  624315 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 16:18:09.414143  624315 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 16:18:09.428395  624315 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 16:18:09.431691  624315 out.go:252]   - Generating certificates and keys ...
	I1202 16:18:09.431798  624315 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 16:18:09.431884  624315 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 16:18:09.551712  624315 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 16:18:09.619865  624315 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 16:18:09.700125  624315 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 16:18:09.785826  624315 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 16:18:10.002211  624315 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 16:18:10.002452  624315 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-682353] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 16:18:10.062821  624315 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 16:18:10.062997  624315 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-682353] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 16:18:10.262133  624315 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 16:18:10.339928  624315 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 16:18:10.406587  624315 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 16:18:10.406681  624315 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 16:18:10.473785  624315 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 16:18:10.509892  624315 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 16:18:10.565788  624315 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 16:18:10.713405  624315 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 16:18:10.837791  624315 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 16:18:10.838222  624315 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 16:18:10.844246  624315 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1202 16:18:09.691754  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	I1202 16:18:11.692782  617021 pod_ready.go:94] pod "coredns-66bc5c9577-6h6nr" is "Ready"
	I1202 16:18:11.692815  617021 pod_ready.go:86] duration metric: took 37.507156807s for pod "coredns-66bc5c9577-6h6nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.696097  617021 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.700661  617021 pod_ready.go:94] pod "etcd-default-k8s-diff-port-806420" is "Ready"
	I1202 16:18:11.700696  617021 pod_ready.go:86] duration metric: took 4.57279ms for pod "etcd-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.702761  617021 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.707235  617021 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-806420" is "Ready"
	I1202 16:18:11.707259  617021 pod_ready.go:86] duration metric: took 4.477641ms for pod "kube-apiserver-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.710403  617021 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.889880  617021 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-806420" is "Ready"
	I1202 16:18:11.889915  617021 pod_ready.go:86] duration metric: took 179.45256ms for pod "kube-controller-manager-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:12.090400  617021 pod_ready.go:83] waiting for pod "kube-proxy-574km" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:12.490369  617021 pod_ready.go:94] pod "kube-proxy-574km" is "Ready"
	I1202 16:18:12.490399  617021 pod_ready.go:86] duration metric: took 399.934021ms for pod "kube-proxy-574km" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:10.846051  624315 out.go:252]   - Booting up control plane ...
	I1202 16:18:10.846198  624315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 16:18:10.846293  624315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 16:18:10.847182  624315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 16:18:10.861199  624315 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 16:18:10.861316  624315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 16:18:10.868121  624315 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 16:18:10.868349  624315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 16:18:10.868404  624315 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 16:18:10.971946  624315 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 16:18:10.972060  624315 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 16:18:11.473760  624315 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.060862ms
	I1202 16:18:11.478484  624315 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 16:18:11.478573  624315 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1202 16:18:11.478669  624315 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 16:18:11.478738  624315 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 16:18:12.484398  624315 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005746987s
	I1202 16:18:12.691533  617021 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:13.090663  617021 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-806420" is "Ready"
	I1202 16:18:13.090696  617021 pod_ready.go:86] duration metric: took 399.134187ms for pod "kube-scheduler-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:13.090710  617021 pod_ready.go:40] duration metric: took 38.908912326s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:18:13.137409  617021 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 16:18:13.139493  617021 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-806420" cluster and "default" namespace by default
	I1202 16:18:13.031004  624315 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.552408562s
	I1202 16:18:14.979810  624315 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501228669s
	I1202 16:18:14.998836  624315 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 16:18:15.009002  624315 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 16:18:15.018325  624315 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 16:18:15.018557  624315 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-682353 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 16:18:15.026607  624315 kubeadm.go:319] [bootstrap-token] Using token: 8ssxbw.m6ls5tgd8f1crjpp
	I1202 16:18:15.027945  624315 out.go:252]   - Configuring RBAC rules ...
	I1202 16:18:15.028111  624315 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 16:18:15.032080  624315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 16:18:15.036812  624315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 16:18:15.039329  624315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 16:18:15.041644  624315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 16:18:15.044054  624315 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 16:18:15.386977  624315 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 16:18:15.803838  624315 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 16:18:16.386472  624315 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 16:18:16.387384  624315 kubeadm.go:319] 
	I1202 16:18:16.387505  624315 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 16:18:16.387525  624315 kubeadm.go:319] 
	I1202 16:18:16.387625  624315 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 16:18:16.387634  624315 kubeadm.go:319] 
	I1202 16:18:16.387663  624315 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 16:18:16.387746  624315 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 16:18:16.387813  624315 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 16:18:16.387821  624315 kubeadm.go:319] 
	I1202 16:18:16.387891  624315 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 16:18:16.387900  624315 kubeadm.go:319] 
	I1202 16:18:16.387976  624315 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 16:18:16.387993  624315 kubeadm.go:319] 
	I1202 16:18:16.388066  624315 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 16:18:16.388174  624315 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 16:18:16.388272  624315 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 16:18:16.388281  624315 kubeadm.go:319] 
	I1202 16:18:16.388408  624315 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 16:18:16.388542  624315 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 16:18:16.388552  624315 kubeadm.go:319] 
	I1202 16:18:16.388679  624315 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8ssxbw.m6ls5tgd8f1crjpp \
	I1202 16:18:16.388808  624315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a700026e2fe1634919809d9050f2aa4b3e0ccbee543d4881e1cd695d56e7eef6 \
	I1202 16:18:16.388847  624315 kubeadm.go:319] 	--control-plane 
	I1202 16:18:16.388856  624315 kubeadm.go:319] 
	I1202 16:18:16.388968  624315 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 16:18:16.388977  624315 kubeadm.go:319] 
	I1202 16:18:16.389085  624315 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8ssxbw.m6ls5tgd8f1crjpp \
	I1202 16:18:16.389209  624315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a700026e2fe1634919809d9050f2aa4b3e0ccbee543d4881e1cd695d56e7eef6 
	I1202 16:18:16.391629  624315 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 16:18:16.391734  624315 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 16:18:16.391766  624315 cni.go:84] Creating CNI manager for ""
	I1202 16:18:16.391776  624315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:18:16.393985  624315 out.go:179] * Configuring CNI (Container Networking Interface) ...
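
kubeadm init has finished at this point and minikube moves on to installing kindnet as the CNI for the docker driver + crio runtime combination. A minimal way to verify the resulting CNI configuration on the node, assuming the newest-cni-682353 profile is still running and that kindnet writes the same /etc/cni/net.d/10-kindnet.conflist file the embed-certs CRI-O log below reports, is:

    minikube -p newest-cni-682353 ssh "sudo cat /etc/cni/net.d/10-kindnet.conflist"
    minikube -p newest-cni-682353 ssh "sudo crictl ps --name kindnet"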
	
	
	==> CRI-O <==
	Dec 02 16:17:33 embed-certs-046271 crio[569]: time="2025-12-02T16:17:33.613398583Z" level=info msg="Started container" PID=1731 containerID=4e3969029638e659ef754301ede73fb0488601bf95a80b6e49bd77a95e8d801f description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lwffd/kubernetes-dashboard id=7f13f7a0-696b-4fde-b464-e402cc27c3f6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ddd40757db9a700cb7416ee34aad7731e590c63887fefd84fbf1efe7973f0f10
	Dec 02 16:17:33 embed-certs-046271 crio[569]: time="2025-12-02T16:17:33.616338458Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 16:17:33 embed-certs-046271 crio[569]: time="2025-12-02T16:17:33.616361897Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.833468055Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=407636f1-9625-45ca-94ba-94a63cf3a387 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.834605237Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d7c6eb9c-b060-4b24-b8f5-1ad01db46be9 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.835694516Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper" id=caa9491e-215b-401c-9b81-433ff808445f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.83582601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.842114747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.843276183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.872038547Z" level=info msg="Created container 5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper" id=caa9491e-215b-401c-9b81-433ff808445f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.87281716Z" level=info msg="Starting container: 5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2" id=9b0f62a1-ec7c-4c1c-bf89-8b2873d23d4e name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.874814783Z" level=info msg="Started container" PID=1793 containerID=5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper id=9b0f62a1-ec7c-4c1c-bf89-8b2873d23d4e name=/runtime.v1.RuntimeService/StartContainer sandboxID=d270de3ac8ec8c28a7dde9ad93f4f6cca0089c323450254e5bed447e89767289
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.954200997Z" level=info msg="Removing container: 0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c" id=78412a44-f31d-4484-bebe-020ca987f11f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.965138722Z" level=info msg="Removed container 0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper" id=78412a44-f31d-4484-bebe-020ca987f11f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.833409033Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=25d0e1f8-746e-4a03-80c3-582e022f9ad3 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.834465222Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7fab009a-ca8e-43f3-b43b-b89b0ae08382 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.835734984Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper" id=8df0a2b4-0875-4371-afdc-a2b466ad015d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.835883888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.843398764Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.843978795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.87539387Z" level=info msg="Created container 286ad93e33bb4ff3d193a144de9658655d06b742be65e682ec9fa7e8d5f3a8f4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper" id=8df0a2b4-0875-4371-afdc-a2b466ad015d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.876056847Z" level=info msg="Starting container: 286ad93e33bb4ff3d193a144de9658655d06b742be65e682ec9fa7e8d5f3a8f4" id=8e87dac6-f9fc-4583-9e51-b4546534f1d3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.878349939Z" level=info msg="Started container" PID=1833 containerID=286ad93e33bb4ff3d193a144de9658655d06b742be65e682ec9fa7e8d5f3a8f4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper id=8e87dac6-f9fc-4583-9e51-b4546534f1d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d270de3ac8ec8c28a7dde9ad93f4f6cca0089c323450254e5bed447e89767289
	Dec 02 16:18:14 embed-certs-046271 crio[569]: time="2025-12-02T16:18:14.018846445Z" level=info msg="Removing container: 5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2" id=8f85be42-9d36-4c5b-b2a5-3e58308e12e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:18:14 embed-certs-046271 crio[569]: time="2025-12-02T16:18:14.031873438Z" level=info msg="Removed container 5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper" id=8f85be42-9d36-4c5b-b2a5-3e58308e12e7 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	286ad93e33bb4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago       Exited              dashboard-metrics-scraper   3                   d270de3ac8ec8       dashboard-metrics-scraper-6ffb444bf9-w9gp4   kubernetes-dashboard
	4e3969029638e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   ddd40757db9a7       kubernetes-dashboard-855c9754f9-lwffd        kubernetes-dashboard
	173719ad1715f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Running             storage-provisioner         1                   435347a1882eb       storage-provisioner                          kube-system
	1442f4434464d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   48f24064849e8       busybox                                      default
	7a01cce632ce4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   8eb2b33b7a99c       coredns-66bc5c9577-f2vhx                     kube-system
	d5c5bd19a7977       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   9684efb97ce9a       kindnet-wpj6k                                kube-system
	378f281936523       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   435347a1882eb       storage-provisioner                          kube-system
	b7bbe4338eaa7       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           54 seconds ago      Running             kube-proxy                  0                   8b83459aa1591       kube-proxy-q9pxb                             kube-system
	c2276216c1487       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           57 seconds ago      Running             kube-scheduler              0                   ed2499503659a       kube-scheduler-embed-certs-046271            kube-system
	16ad18068e5f3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   1349215b79b4b       etcd-embed-certs-046271                      kube-system
	3bf3111e94363       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           57 seconds ago      Running             kube-apiserver              0                   0975481c1d9dd       kube-apiserver-embed-certs-046271            kube-system
	698ef956828ff       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           57 seconds ago      Running             kube-controller-manager     0                   2dd3ccb60e646       kube-controller-manager-embed-certs-046271   kube-system
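
The table above is the node-local CRI view; assuming the embed-certs-046271 profile is still up, it can be reproduced with:

    minikube -p embed-certs-046271 ssh "sudo crictl ps -a"
    minikube -p embed-certs-046271 ssh "sudo crictl ps -a -o json"

The JSON form carries the full per-container metadata (image references, pod sandbox IDs) if the truncated IDs above need to be resolved.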
	
	
	==> coredns [7a01cce632ce473b9f13d211e46baeb90d009c8a9471ff0c1ed098c62fef035b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59409 - 55555 "HINFO IN 8696943024032006366.2951031596223566118. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022692505s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
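
The CoreDNS errors above are list calls against the kubernetes Service (10.96.0.1:443) timing out while the pod network was still being brought up on the restarted node. A quick way to re-check from the client side, assuming the kubeconfig context minikube created for this profile, is:

    kubectl --context embed-certs-046271 -n kube-system logs coredns-66bc5c9577-f2vhx
    kubectl --context embed-certs-046271 get svc kubernetes -n default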
	
	
	==> describe nodes <==
	Name:               embed-certs-046271
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-046271
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=embed-certs-046271
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_16_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:16:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-046271
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:18:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:17:53 +0000   Tue, 02 Dec 2025 16:16:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:17:53 +0000   Tue, 02 Dec 2025 16:16:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:17:53 +0000   Tue, 02 Dec 2025 16:16:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:17:53 +0000   Tue, 02 Dec 2025 16:16:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-046271
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                e2b6e9a3-1779-45e2-a9a6-d48b0dea91ba
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-f2vhx                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-046271                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-wpj6k                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-046271             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-046271    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-q9pxb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-046271             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-w9gp4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lwffd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node embed-certs-046271 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node embed-certs-046271 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node embed-certs-046271 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           109s               node-controller  Node embed-certs-046271 event: Registered Node embed-certs-046271 in Controller
	  Normal  NodeReady                97s                kubelet          Node embed-certs-046271 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node embed-certs-046271 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node embed-certs-046271 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node embed-certs-046271 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node embed-certs-046271 event: Registered Node embed-certs-046271 in Controller
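
This node description is the API-server view of embed-certs-046271 and can be regenerated with:

    kubectl --context embed-certs-046271 describe node embed-certs-046271

The Allocated resources block is the sum of the per-pod requests and limits listed above: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m CPU requests, and 70Mi + 100Mi + 50Mi = 220Mi memory requests.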
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [16ad18068e5f3c997cc9fd8d07b82668917afab1c7be18e0282d7eaaa341d8c1] <==
	{"level":"warn","ts":"2025-12-02T16:17:21.366392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.376476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.384355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.392857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.400199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.408295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.416039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.423030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.431920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.456982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.463989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.471521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.479185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.486053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.497820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.505835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.513318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.519857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.528468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.535807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.556808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.563624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.570243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.615593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48328","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T16:18:13.539717Z","caller":"traceutil/trace.go:172","msg":"trace[681584741] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"151.034126ms","start":"2025-12-02T16:18:13.388649Z","end":"2025-12-02T16:18:13.539683Z","steps":["trace[681584741] 'process raft request'  (duration: 63.041073ms)","trace[681584741] 'compare'  (duration: 87.805684ms)"],"step_count":2}
	
	
	==> kernel <==
	 16:18:17 up  3:00,  0 user,  load average: 3.68, 4.02, 2.72
	Linux embed-certs-046271 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d5c5bd19a797757b7b8b4e7b9bd03cab31a65007d9e44dc873e73b592003f935] <==
	I1202 16:17:23.479536       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1202 16:17:23.479763       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:17:23.479792       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:17:23.479822       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:17:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:17:23.588065       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:17:23.588117       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:17:23.588134       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:17:23.588616       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1202 16:17:23.603866       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 16:17:23.604022       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1202 16:17:25.088531       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:17:25.088575       1 metrics.go:72] Registering metrics
	I1202 16:17:25.088714       1 controller.go:711] "Syncing nftables rules"
	I1202 16:17:33.588498       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 16:17:33.588577       1 main.go:301] handling current node
	I1202 16:17:43.592501       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 16:17:43.592535       1 main.go:301] handling current node
	I1202 16:17:53.588921       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 16:17:53.588956       1 main.go:301] handling current node
	I1202 16:18:03.589504       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 16:18:03.589556       1 main.go:301] handling current node
	I1202 16:18:13.594619       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 16:18:13.594651       1 main.go:301] handling current node
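
kindnet's initial list failures against 10.96.0.1:443 at 16:17:23 clear by 16:17:25 when its informer caches sync, after which the node-handling loop above runs every ten seconds. The pod name appears in the container table earlier, so its log can be pulled directly:

    kubectl --context embed-certs-046271 -n kube-system logs kindnet-wpj6k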
	
	
	==> kube-apiserver [3bf3111e9436304c788bb6ef52a85daf72acb7556f1bd1e4dbd20f1c48b40884] <==
	I1202 16:17:22.109797       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 16:17:22.109811       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 16:17:22.107629       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1202 16:17:22.109854       1 aggregator.go:171] initial CRD sync complete...
	I1202 16:17:22.109862       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 16:17:22.109867       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 16:17:22.109873       1 cache.go:39] Caches are synced for autoregister controller
	I1202 16:17:22.120755       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 16:17:22.120915       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1202 16:17:22.121555       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 16:17:22.133023       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1202 16:17:22.133139       1 policy_source.go:240] refreshing policies
	I1202 16:17:22.151685       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:17:22.172608       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:17:22.430253       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:17:22.459542       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:17:22.480151       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:17:22.489086       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:17:22.496276       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:17:22.540900       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.164.34"}
	I1202 16:17:22.554641       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.214.15"}
	I1202 16:17:23.009784       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 16:17:25.510883       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 16:17:25.660716       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:17:25.961116       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [698ef956828ff2ca307684a986b76f4d7810277a835e5153b0a6cfc108ff4852] <==
	I1202 16:17:25.416930       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1202 16:17:25.418096       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 16:17:25.420316       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 16:17:25.422596       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1202 16:17:25.432932       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 16:17:25.439189       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 16:17:25.440375       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1202 16:17:25.454833       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 16:17:25.457525       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 16:17:25.457538       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 16:17:25.457566       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 16:17:25.457596       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 16:17:25.457636       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 16:17:25.457677       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 16:17:25.457746       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 16:17:25.459316       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 16:17:25.461561       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 16:17:25.461589       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 16:17:25.463394       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 16:17:25.463477       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 16:17:25.463553       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-046271"
	I1202 16:17:25.463598       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 16:17:25.464916       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 16:17:25.467215       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 16:17:25.482159       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b7bbe4338eaa713a9f46532f0d5f4f8fdd4e7eb320af43e5d146a44067c124a7] <==
	I1202 16:17:23.264149       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:17:23.346502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 16:17:23.447539       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 16:17:23.447572       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1202 16:17:23.447686       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:17:23.466837       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:17:23.466899       1 server_linux.go:132] "Using iptables Proxier"
	I1202 16:17:23.472596       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:17:23.472997       1 server.go:527] "Version info" version="v1.34.2"
	I1202 16:17:23.473092       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:17:23.474177       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:17:23.474204       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:17:23.474298       1 config.go:200] "Starting service config controller"
	I1202 16:17:23.474314       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:17:23.474329       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:17:23.474342       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:17:23.474469       1 config.go:309] "Starting node config controller"
	I1202 16:17:23.474502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:17:23.474511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:17:23.574383       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 16:17:23.574442       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 16:17:23.574393       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
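Note: the nodePortAddresses warning in the kube-proxy log above is informational. If NodePort traffic really should be limited to the primary node IP, the field the message points at can be set in the kube-proxy configuration; a minimal sketch, assuming the kubeadm-style kube-proxy ConfigMap and DaemonSet names that a default minikube cluster uses:
    kubectl --context embed-certs-046271 -n kube-system edit configmap kube-proxy   # set nodePortAddresses: ["primary"] in the KubeProxyConfiguration section
    kubectl --context embed-certs-046271 -n kube-system rollout restart daemonset kube-proxy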
	
	
	==> kube-scheduler [c2276216c1487f93e3277d2422350dd969b6a1c3c3470ca0ad9cf54e25deb70f] <==
	I1202 16:17:20.856151       1 serving.go:386] Generated self-signed cert in-memory
	W1202 16:17:22.052526       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 16:17:22.052591       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 16:17:22.052607       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 16:17:22.052617       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 16:17:22.103189       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 16:17:22.103227       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:17:22.106509       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 16:17:22.106975       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:17:22.107006       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:17:22.107033       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 16:17:22.207562       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
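Note: the authentication lookup warnings from kube-scheduler above are harmless for this test, but the RBAC grant the message suggests can be created explicitly. A hedged sketch, assuming the scheduler authenticates as the user named in the error (the rolebinding name is arbitrary):
    kubectl --context embed-certs-046271 -n kube-system create rolebinding scheduler-auth-reader \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler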
	
	
	==> kubelet <==
	Dec 02 16:17:28 embed-certs-046271 kubelet[735]: I1202 16:17:28.880514     735 scope.go:117] "RemoveContainer" containerID="c6fbffd6fec06441cb182491663ecf598571f5c14a1754fae8d923edbb49ba79"
	Dec 02 16:17:29 embed-certs-046271 kubelet[735]: I1202 16:17:29.886239     735 scope.go:117] "RemoveContainer" containerID="c6fbffd6fec06441cb182491663ecf598571f5c14a1754fae8d923edbb49ba79"
	Dec 02 16:17:29 embed-certs-046271 kubelet[735]: I1202 16:17:29.886462     735 scope.go:117] "RemoveContainer" containerID="0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c"
	Dec 02 16:17:29 embed-certs-046271 kubelet[735]: E1202 16:17:29.886672     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w9gp4_kubernetes-dashboard(2f530710-0c0f-478b-b034-2ca8725a5222)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4" podUID="2f530710-0c0f-478b-b034-2ca8725a5222"
	Dec 02 16:17:30 embed-certs-046271 kubelet[735]: I1202 16:17:30.588693     735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 02 16:17:30 embed-certs-046271 kubelet[735]: I1202 16:17:30.894616     735 scope.go:117] "RemoveContainer" containerID="0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c"
	Dec 02 16:17:30 embed-certs-046271 kubelet[735]: E1202 16:17:30.897812     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w9gp4_kubernetes-dashboard(2f530710-0c0f-478b-b034-2ca8725a5222)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4" podUID="2f530710-0c0f-478b-b034-2ca8725a5222"
	Dec 02 16:17:33 embed-certs-046271 kubelet[735]: I1202 16:17:33.920150     735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lwffd" podStartSLOduration=0.712241732 podStartE2EDuration="7.920128512s" podCreationTimestamp="2025-12-02 16:17:26 +0000 UTC" firstStartedPulling="2025-12-02 16:17:26.359496702 +0000 UTC m=+6.621150412" lastFinishedPulling="2025-12-02 16:17:33.567383486 +0000 UTC m=+13.829037192" observedRunningTime="2025-12-02 16:17:33.920096314 +0000 UTC m=+14.181750037" watchObservedRunningTime="2025-12-02 16:17:33.920128512 +0000 UTC m=+14.181782235"
	Dec 02 16:17:38 embed-certs-046271 kubelet[735]: I1202 16:17:38.845102     735 scope.go:117] "RemoveContainer" containerID="0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c"
	Dec 02 16:17:38 embed-certs-046271 kubelet[735]: E1202 16:17:38.845353     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w9gp4_kubernetes-dashboard(2f530710-0c0f-478b-b034-2ca8725a5222)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4" podUID="2f530710-0c0f-478b-b034-2ca8725a5222"
	Dec 02 16:17:50 embed-certs-046271 kubelet[735]: I1202 16:17:50.832757     735 scope.go:117] "RemoveContainer" containerID="0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c"
	Dec 02 16:17:50 embed-certs-046271 kubelet[735]: I1202 16:17:50.952717     735 scope.go:117] "RemoveContainer" containerID="0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c"
	Dec 02 16:17:50 embed-certs-046271 kubelet[735]: I1202 16:17:50.952935     735 scope.go:117] "RemoveContainer" containerID="5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2"
	Dec 02 16:17:50 embed-certs-046271 kubelet[735]: E1202 16:17:50.953166     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w9gp4_kubernetes-dashboard(2f530710-0c0f-478b-b034-2ca8725a5222)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4" podUID="2f530710-0c0f-478b-b034-2ca8725a5222"
	Dec 02 16:17:58 embed-certs-046271 kubelet[735]: I1202 16:17:58.845593     735 scope.go:117] "RemoveContainer" containerID="5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2"
	Dec 02 16:17:58 embed-certs-046271 kubelet[735]: E1202 16:17:58.845949     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w9gp4_kubernetes-dashboard(2f530710-0c0f-478b-b034-2ca8725a5222)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4" podUID="2f530710-0c0f-478b-b034-2ca8725a5222"
	Dec 02 16:18:13 embed-certs-046271 kubelet[735]: I1202 16:18:13.832916     735 scope.go:117] "RemoveContainer" containerID="5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2"
	Dec 02 16:18:14 embed-certs-046271 kubelet[735]: I1202 16:18:14.017399     735 scope.go:117] "RemoveContainer" containerID="5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2"
	Dec 02 16:18:14 embed-certs-046271 kubelet[735]: I1202 16:18:14.017653     735 scope.go:117] "RemoveContainer" containerID="286ad93e33bb4ff3d193a144de9658655d06b742be65e682ec9fa7e8d5f3a8f4"
	Dec 02 16:18:14 embed-certs-046271 kubelet[735]: E1202 16:18:14.017864     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w9gp4_kubernetes-dashboard(2f530710-0c0f-478b-b034-2ca8725a5222)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4" podUID="2f530710-0c0f-478b-b034-2ca8725a5222"
	Dec 02 16:18:14 embed-certs-046271 kubelet[735]: I1202 16:18:14.423495     735 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 02 16:18:14 embed-certs-046271 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 16:18:14 embed-certs-046271 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 16:18:14 embed-certs-046271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 16:18:14 embed-certs-046271 systemd[1]: kubelet.service: Consumed 1.838s CPU time.
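Note: the kubelet entries above show dashboard-metrics-scraper in CrashLoopBackOff with a growing back-off. The pod name appears in the log, so its previous container output can be inspected directly:
    kubectl --context embed-certs-046271 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-w9gp4 --previous
    kubectl --context embed-certs-046271 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-w9gp4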
	
	
	==> kubernetes-dashboard [4e3969029638e659ef754301ede73fb0488601bf95a80b6e49bd77a95e8d801f] <==
	2025/12/02 16:17:33 Using namespace: kubernetes-dashboard
	2025/12/02 16:17:33 Using in-cluster config to connect to apiserver
	2025/12/02 16:17:33 Using secret token for csrf signing
	2025/12/02 16:17:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 16:17:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 16:17:33 Successful initial request to the apiserver, version: v1.34.2
	2025/12/02 16:17:33 Generating JWE encryption key
	2025/12/02 16:17:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 16:17:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 16:17:33 Initializing JWE encryption key from synchronized object
	2025/12/02 16:17:33 Creating in-cluster Sidecar client
	2025/12/02 16:17:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:33 Serving insecurely on HTTP port: 9090
	2025/12/02 16:18:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:33 Starting overwatch
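Note: the metric client health check above retries every 30 seconds while the dashboard-metrics-scraper service has no working backend. The service and its endpoints can be checked directly (service name taken from the log above):
    kubectl --context embed-certs-046271 -n kubernetes-dashboard get svc dashboard-metrics-scraper
    kubectl --context embed-certs-046271 -n kubernetes-dashboard get endpointslices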
	
	
	==> storage-provisioner [173719ad1715f37da7c13d636a2be3e7910afc81ad6d92a88ab2d5268e0b4ad0] <==
	W1202 16:17:53.421019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:55.424687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:55.429458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:57.432456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:57.436401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:59.440248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:59.444219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:01.447467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:01.451171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:03.455096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:03.461085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:05.464594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:05.469108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:07.473124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:07.478194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:09.481336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:09.486307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:11.489802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:11.493570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:13.540841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:13.562982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:15.567234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:15.572691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:17.576290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:17.580294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
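Note: the repeated warnings above are cosmetic; the storage-provisioner still lists v1 Endpoints, which the apiserver flags as deprecated. The discovery.k8s.io/v1 replacement the warning points at can be queried in the usual way, e.g.:
    kubectl --context embed-certs-046271 get endpointslices.discovery.k8s.io -A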
	
	
	==> storage-provisioner [378f281936523058e29cebe67cdf6b667293a6726f2577c5653947947d6210ab] <==
	I1202 16:17:23.227310       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 16:17:23.232071       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
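Note: the fatal error above comes from a storage-provisioner container that started before the apiserver was reachable at the ClusterIP; the other storage-provisioner instance shown earlier keeps running. If this failure persisted, reachability of the service IP could be checked from inside the node, assuming curl is available in the node image:
    out/minikube-linux-amd64 -p embed-certs-046271 ssh -- curl -sk https://10.96.0.1:443/version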
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-046271 -n embed-certs-046271
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-046271 -n embed-certs-046271: exit status 2 (347.678524ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-046271 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
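Note: the status checks in this post-mortem use Go templates over minikube's status struct; several fields can be printed in one call, which is useful when a paused cluster reports mixed component states (field names as used elsewhere in this report):
    out/minikube-linux-amd64 status -p embed-certs-046271 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'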
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-046271
helpers_test.go:243: (dbg) docker inspect embed-certs-046271:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310",
	        "Created": "2025-12-02T16:16:07.197943832Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 615383,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:17:13.655002956Z",
	            "FinishedAt": "2025-12-02T16:17:11.235361652Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310/hostname",
	        "HostsPath": "/var/lib/docker/containers/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310/hosts",
	        "LogPath": "/var/lib/docker/containers/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310/c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310-json.log",
	        "Name": "/embed-certs-046271",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-046271:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-046271",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c270563500586ca0092f2a49edbeca9b376c5254a06f29ac0e88ce01fd93d310",
	                "LowerDir": "/var/lib/docker/overlay2/12561fe812efc0a7100c7e89c65c08692ffbc64b594cbfd37abbc22239f7f12c-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12561fe812efc0a7100c7e89c65c08692ffbc64b594cbfd37abbc22239f7f12c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12561fe812efc0a7100c7e89c65c08692ffbc64b594cbfd37abbc22239f7f12c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12561fe812efc0a7100c7e89c65c08692ffbc64b594cbfd37abbc22239f7f12c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-046271",
	                "Source": "/var/lib/docker/volumes/embed-certs-046271/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-046271",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-046271",
	                "name.minikube.sigs.k8s.io": "embed-certs-046271",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bd9aaae1bf41337c86ebcb0add389c81ee2e208ffc07ac37f577647496cd92ce",
	            "SandboxKey": "/var/run/docker/netns/bd9aaae1bf41",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33250"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33251"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33254"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33252"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33253"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-046271": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f242ea03e26ef86f9adac97f285055eeae57f7a447eb51a12604c316daba1ca0",
	                    "EndpointID": "1815b42361b99c183011d80cd98a51a0e7a8e723b177c92e069c4f2e724dfbb0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "86:08:0a:24:63:8a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-046271",
	                        "c27056350058"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
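Note: most of the docker inspect dump above is static configuration. For the pause-related state, the relevant fields can be pulled with a format string, and the published host ports listed directly:
    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-046271
    docker port embed-certs-046271 8443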
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-046271 -n embed-certs-046271
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-046271 -n embed-certs-046271: exit status 2 (344.306758ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-046271 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-046271 logs -n 25: (1.18169033s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p no-preload-534842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p no-preload-534842 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-380588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p no-preload-534842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-046271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p embed-certs-046271 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-806420 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-046271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-806420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ old-k8s-version-380588 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p old-k8s-version-380588 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ no-preload-534842 image list --format=json                                                                                                                                                                                                           │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p no-preload-534842 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ image   │ embed-certs-046271 image list --format=json                                                                                                                                                                                                          │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p embed-certs-046271 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:17:52
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:17:52.644799  624315 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:17:52.644911  624315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:52.644919  624315 out.go:374] Setting ErrFile to fd 2...
	I1202 16:17:52.644923  624315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:52.645119  624315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:17:52.645660  624315 out.go:368] Setting JSON to false
	I1202 16:17:52.646996  624315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10814,"bootTime":1764681459,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:17:52.647061  624315 start.go:143] virtualization: kvm guest
	I1202 16:17:52.649119  624315 out.go:179] * [newest-cni-682353] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:17:52.650307  624315 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:17:52.650341  624315 notify.go:221] Checking for updates...
	I1202 16:17:52.652574  624315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:17:52.653891  624315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:52.655069  624315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:17:52.656462  624315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:17:52.658069  624315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:17:52.659797  624315 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:52.659881  624315 config.go:182] Loaded profile config "embed-certs-046271": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:52.659969  624315 config.go:182] Loaded profile config "no-preload-534842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:17:52.660075  624315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:17:52.686164  624315 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:17:52.686294  624315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:52.756277  624315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-02 16:17:52.744065656 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:52.756384  624315 docker.go:319] overlay module found
	I1202 16:17:52.760989  624315 out.go:179] * Using the docker driver based on user configuration
	I1202 16:17:52.762385  624315 start.go:309] selected driver: docker
	I1202 16:17:52.762402  624315 start.go:927] validating driver "docker" against <nil>
	I1202 16:17:52.762413  624315 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:17:52.762997  624315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:52.838284  624315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-02 16:17:52.827987453 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:52.838514  624315 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1202 16:17:52.838550  624315 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1202 16:17:52.838830  624315 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 16:17:52.844583  624315 out.go:179] * Using Docker driver with root privileges
	I1202 16:17:52.845732  624315 cni.go:84] Creating CNI manager for ""
	I1202 16:17:52.845789  624315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:17:52.845805  624315 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 16:17:52.845898  624315 start.go:353] cluster config:
	{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:52.847341  624315 out.go:179] * Starting "newest-cni-682353" primary control-plane node in "newest-cni-682353" cluster
	I1202 16:17:52.848449  624315 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:17:52.850119  624315 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	W1202 16:17:48.735446  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:51.235172  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	I1202 16:17:52.851463  624315 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:17:52.851567  624315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:17:52.877300  624315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:17:52.877327  624315 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 16:17:53.534100  624315 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1202 16:17:53.777545  624315 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1202 16:17:53.777737  624315 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json ...
	I1202 16:17:53.777776  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json: {Name:mk1da2fd97ec61d8b0621ec4e77abec4e577dd62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:53.777919  624315 cache.go:107] acquiring lock: {Name:mk6b8eeb5270fa67a5a87f892f37de1ae4805f75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.777969  624315 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:17:53.777962  624315 cache.go:107] acquiring lock: {Name:mk3f4d40fdf359ce0573637a386f14c0a310cdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778008  624315 start.go:360] acquireMachinesLock for newest-cni-682353: {Name:mkfed8f02380af59f92aa0b6f8ae02a29dbe0c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.777991  624315 cache.go:107] acquiring lock: {Name:mka2aa325920dfb2720f9036278856e8dac95446 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778031  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 16:17:53.778042  624315 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 144.394µs
	I1202 16:17:53.778050  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 16:17:53.778058  624315 start.go:364] duration metric: took 37.681µs to acquireMachinesLock for "newest-cni-682353"
	I1202 16:17:53.778060  624315 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 16:17:53.778062  624315 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 116.303µs
	I1202 16:17:53.778072  624315 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 16:17:53.778078  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 16:17:53.778080  624315 cache.go:107] acquiring lock: {Name:mkce5d795e0ca01a9ee3d674d001cd6e04bbbfba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778090  624315 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 120.36µs
	I1202 16:17:53.778124  624315 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 16:17:53.778077  624315 start.go:93] Provisioning new machine with config: &{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:17:53.778170  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 16:17:53.778176  624315 start.go:125] createHost starting for "" (driver="docker")
	I1202 16:17:53.778182  624315 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 105.339µs
	I1202 16:17:53.778196  624315 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 16:17:53.778173  624315 cache.go:107] acquiring lock: {Name:mk91bc91bcc535b3edd8200bf0c06e4d97781487 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778225  624315 cache.go:107] acquiring lock: {Name:mk17b77bf762047097cbe060b18dc85ae78a9727 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778251  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 16:17:53.778262  624315 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 102.828µs
	I1202 16:17:53.778276  624315 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 16:17:53.778262  624315 cache.go:107] acquiring lock: {Name:mkec45cdfdbdafc0ef1296b9d77662a50add1cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778298  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 16:17:53.778312  624315 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 111.218µs
	I1202 16:17:53.778316  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 16:17:53.778322  624315 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 16:17:53.778315  624315 cache.go:107] acquiring lock: {Name:mk821cef64e8468a2739d03d2e1019ac980bf2cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778326  624315 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 75.311µs
	W1202 16:17:53.692416  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:56.191595  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	I1202 16:17:53.778371  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 16:17:53.778382  624315 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 75.582µs
	I1202 16:17:53.778390  624315 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 16:17:53.778338  624315 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 16:17:53.778410  624315 cache.go:87] Successfully saved all images to host disk.
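The cache lines above record each control-plane image being saved as a tar file under the profile's cache directory before the machine is created. A quick way to see what ended up on the host (illustrative only; the directory is the one named in the log):

    ls -R /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/
    # expected entries, per the cache lines above: kube-apiserver_v1.35.0-beta.0,
    # kube-controller-manager_v1.35.0-beta.0, kube-scheduler_v1.35.0-beta.0,
    # kube-proxy_v1.35.0-beta.0, etcd_3.6.5-0, pause_3.10.1, coredns/coredns_v1.13.1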
	I1202 16:17:53.788193  624315 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 16:17:53.788477  624315 start.go:159] libmachine.API.Create for "newest-cni-682353" (driver="docker")
	I1202 16:17:53.788525  624315 client.go:173] LocalClient.Create starting
	I1202 16:17:53.788618  624315 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem
	I1202 16:17:53.788655  624315 main.go:143] libmachine: Decoding PEM data...
	I1202 16:17:53.788673  624315 main.go:143] libmachine: Parsing certificate...
	I1202 16:17:53.788729  624315 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem
	I1202 16:17:53.788748  624315 main.go:143] libmachine: Decoding PEM data...
	I1202 16:17:53.788759  624315 main.go:143] libmachine: Parsing certificate...
	I1202 16:17:53.789143  624315 cli_runner.go:164] Run: docker network inspect newest-cni-682353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 16:17:53.809974  624315 cli_runner.go:211] docker network inspect newest-cni-682353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 16:17:53.810064  624315 network_create.go:284] running [docker network inspect newest-cni-682353] to gather additional debugging logs...
	I1202 16:17:53.810092  624315 cli_runner.go:164] Run: docker network inspect newest-cni-682353
	W1202 16:17:53.830774  624315 cli_runner.go:211] docker network inspect newest-cni-682353 returned with exit code 1
	I1202 16:17:53.830803  624315 network_create.go:287] error running [docker network inspect newest-cni-682353]: docker network inspect newest-cni-682353: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-682353 not found
	I1202 16:17:53.830820  624315 network_create.go:289] output of [docker network inspect newest-cni-682353]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-682353 not found
	
	** /stderr **
	I1202 16:17:53.830928  624315 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:17:53.851114  624315 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-59c4d474daec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:20:cf:7a:79:c5} reservation:<nil>}
	I1202 16:17:53.851815  624315 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-208582b1a4af IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:5b:fe:2d:46:75} reservation:<nil>}
	I1202 16:17:53.852643  624315 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-230a00bd70ce IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:8f:10:7f:8e:d3} reservation:<nil>}
	I1202 16:17:53.853252  624315 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f242ea03e26e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:4d:9d:95:a5:56} reservation:<nil>}
	I1202 16:17:53.853946  624315 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-71c0f0496cc5 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:fa:9c:49:d2:0f:a1} reservation:<nil>}
	I1202 16:17:53.854357  624315 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-26f54f8ab80d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:22:17:9c:97:61:b0} reservation:<nil>}
	I1202 16:17:53.854995  624315 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0027bef70}
	I1202 16:17:53.855021  624315 network_create.go:124] attempt to create docker network newest-cni-682353 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 16:17:53.855077  624315 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-682353 newest-cni-682353
	I1202 16:17:53.916078  624315 network_create.go:108] docker network newest-cni-682353 192.168.103.0/24 created
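The subnet scan above skips every 192.168.x.0/24 range already claimed by another profile's bridge and settles on 192.168.103.0/24. The network itself is created with a plain docker CLI call; reproducing or checking it by hand would look roughly like this (a sketch reusing the same flags and inspect template the log shows):

    docker network create --driver=bridge \
      --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=newest-cni-682353 \
      newest-cni-682353
    docker network inspect newest-cni-682353 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'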
	I1202 16:17:53.916113  624315 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-682353" container
	I1202 16:17:53.916193  624315 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 16:17:53.938300  624315 cli_runner.go:164] Run: docker volume create newest-cni-682353 --label name.minikube.sigs.k8s.io=newest-cni-682353 --label created_by.minikube.sigs.k8s.io=true
	I1202 16:17:53.959487  624315 oci.go:103] Successfully created a docker volume newest-cni-682353
	I1202 16:17:53.959565  624315 cli_runner.go:164] Run: docker run --rm --name newest-cni-682353-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-682353 --entrypoint /usr/bin/test -v newest-cni-682353:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 16:17:54.413877  624315 oci.go:107] Successfully prepared a docker volume newest-cni-682353
	I1202 16:17:54.413932  624315 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1202 16:17:54.414025  624315 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 16:17:54.414062  624315 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 16:17:54.414111  624315 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 16:17:54.479852  624315 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-682353 --name newest-cni-682353 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-682353 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-682353 --network newest-cni-682353 --ip 192.168.103.2 --volume newest-cni-682353:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 16:17:54.770265  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Running}}
	I1202 16:17:54.793036  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:17:54.812286  624315 cli_runner.go:164] Run: docker exec newest-cni-682353 stat /var/lib/dpkg/alternatives/iptables
	I1202 16:17:54.862730  624315 oci.go:144] the created container "newest-cni-682353" has a running status.
	I1202 16:17:54.862790  624315 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa...
	I1202 16:17:55.048010  624315 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 16:17:55.089312  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:17:55.111220  624315 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 16:17:55.111254  624315 kic_runner.go:114] Args: [docker exec --privileged newest-cni-682353 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 16:17:55.162548  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:17:55.185203  624315 machine.go:94] provisionDockerMachine start ...
	I1202 16:17:55.185292  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:55.206996  624315 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:55.207255  624315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33260 <nil> <nil>}
	I1202 16:17:55.207290  624315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:17:55.350585  624315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-682353
	
	I1202 16:17:55.350623  624315 ubuntu.go:182] provisioning hostname "newest-cni-682353"
	I1202 16:17:55.350718  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:55.371361  624315 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:55.371720  624315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33260 <nil> <nil>}
	I1202 16:17:55.371746  624315 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-682353 && echo "newest-cni-682353" | sudo tee /etc/hostname
	I1202 16:17:55.527645  624315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-682353
	
	I1202 16:17:55.527735  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:55.546178  624315 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:55.546465  624315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33260 <nil> <nil>}
	I1202 16:17:55.546490  624315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-682353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-682353/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-682353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:17:55.688466  624315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:17:55.688504  624315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:17:55.688529  624315 ubuntu.go:190] setting up certificates
	I1202 16:17:55.688543  624315 provision.go:84] configureAuth start
	I1202 16:17:55.688607  624315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:17:55.707288  624315 provision.go:143] copyHostCerts
	I1202 16:17:55.707359  624315 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:17:55.707372  624315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:17:55.707483  624315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:17:55.707608  624315 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:17:55.707622  624315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:17:55.707663  624315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:17:55.707741  624315 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:17:55.707750  624315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:17:55.707784  624315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:17:55.707854  624315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.newest-cni-682353 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-682353]
	I1202 16:17:55.874758  624315 provision.go:177] copyRemoteCerts
	I1202 16:17:55.874827  624315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:17:55.874887  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:55.893962  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
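The ssh client opened here goes through the host port that Docker mapped to the container's port 22 (33260 for this run), authenticating as the "docker" user with the key generated a few lines earlier. An equivalent manual session, with the values taken from the log above, would be roughly:

    ssh -i /home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa \
        -p 33260 docker@127.0.0.1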
	I1202 16:17:55.995114  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:17:56.016141  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 16:17:56.034903  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:17:56.053997  624315 provision.go:87] duration metric: took 365.435069ms to configureAuth
	I1202 16:17:56.054029  624315 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:17:56.054223  624315 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:17:56.054345  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.073989  624315 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:56.074282  624315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33260 <nil> <nil>}
	I1202 16:17:56.074316  624315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:17:56.371749  624315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:17:56.371776  624315 machine.go:97] duration metric: took 1.186549675s to provisionDockerMachine
	I1202 16:17:56.371787  624315 client.go:176] duration metric: took 2.583251832s to LocalClient.Create
	I1202 16:17:56.371809  624315 start.go:167] duration metric: took 2.58333403s to libmachine.API.Create "newest-cni-682353"
	I1202 16:17:56.371819  624315 start.go:293] postStartSetup for "newest-cni-682353" (driver="docker")
	I1202 16:17:56.371833  624315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:17:56.371892  624315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:17:56.371933  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.393135  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:56.496995  624315 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:17:56.501038  624315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:17:56.501074  624315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:17:56.501087  624315 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:17:56.501151  624315 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:17:56.501258  624315 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:17:56.501378  624315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:17:56.509784  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:17:56.531088  624315 start.go:296] duration metric: took 159.252805ms for postStartSetup
	I1202 16:17:56.531578  624315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:17:56.551457  624315 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json ...
	I1202 16:17:56.551748  624315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:17:56.551795  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.569785  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:56.667711  624315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:17:56.672371  624315 start.go:128] duration metric: took 2.894159147s to createHost
	I1202 16:17:56.672398  624315 start.go:83] releasing machines lock for "newest-cni-682353", held for 2.894331813s
	I1202 16:17:56.672480  624315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:17:56.691026  624315 ssh_runner.go:195] Run: cat /version.json
	I1202 16:17:56.691068  624315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:17:56.691085  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.691142  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.709964  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:56.710241  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:56.863891  624315 ssh_runner.go:195] Run: systemctl --version
	I1202 16:17:56.870804  624315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:17:56.905199  624315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:17:56.910177  624315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:17:56.910251  624315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:17:56.939092  624315 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 16:17:56.939115  624315 start.go:496] detecting cgroup driver to use...
	I1202 16:17:56.939145  624315 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:17:56.939191  624315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:17:56.955610  624315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:17:56.969366  624315 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:17:56.969454  624315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:17:56.986554  624315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:17:57.004854  624315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:17:57.093269  624315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:17:57.182306  624315 docker.go:234] disabling docker service ...
	I1202 16:17:57.182377  624315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:17:57.203634  624315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:17:57.217007  624315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:17:57.302606  624315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:17:57.390234  624315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:17:57.403746  624315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:17:57.418993  624315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:17:57.419043  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.429606  624315 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:17:57.429677  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.440021  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.449681  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.459146  624315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:17:57.467927  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.477178  624315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.491151  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.501347  624315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:17:57.509339  624315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:17:57.517116  624315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:57.601755  624315 ssh_runner.go:195] Run: sudo systemctl restart crio
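Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf pointing at the new pause image, the systemd cgroup manager, a pod-scoped conmon cgroup and the unprivileged-port sysctl before crio is restarted. A rough spot-check inside the node (expected values inferred from the commands above, not captured output):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",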
	I1202 16:17:58.048791  624315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:17:58.048863  624315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:17:58.053242  624315 start.go:564] Will wait 60s for crictl version
	I1202 16:17:58.053300  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.057049  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:17:58.081825  624315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 16:17:58.081931  624315 ssh_runner.go:195] Run: crio --version
	I1202 16:17:58.110886  624315 ssh_runner.go:195] Run: crio --version
	I1202 16:17:58.141240  624315 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 16:17:58.142295  624315 cli_runner.go:164] Run: docker network inspect newest-cni-682353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:17:58.161244  624315 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 16:17:58.165842  624315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
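The one-liner above rewrites /etc/hosts inside the node so that host.minikube.internal resolves to the network gateway (192.168.103.1 here). If needed, the mapping can be confirmed from the host with something like (illustrative):

    minikube -p newest-cni-682353 ssh -- grep host.minikube.internal /etc/hosts
    # 192.168.103.1   host.minikube.internal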
	I1202 16:17:58.178580  624315 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1202 16:17:53.734533  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:56.234121  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:58.734313  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	I1202 16:18:00.733749  615191 pod_ready.go:94] pod "coredns-66bc5c9577-f2vhx" is "Ready"
	I1202 16:18:00.733779  615191 pod_ready.go:86] duration metric: took 37.005697217s for pod "coredns-66bc5c9577-f2vhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.736453  615191 pod_ready.go:83] waiting for pod "etcd-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.740534  615191 pod_ready.go:94] pod "etcd-embed-certs-046271" is "Ready"
	I1202 16:18:00.740565  615191 pod_ready.go:86] duration metric: took 4.089173ms for pod "etcd-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.742727  615191 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.746692  615191 pod_ready.go:94] pod "kube-apiserver-embed-certs-046271" is "Ready"
	I1202 16:18:00.746764  615191 pod_ready.go:86] duration metric: took 4.006581ms for pod "kube-apiserver-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.748680  615191 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.933918  615191 pod_ready.go:94] pod "kube-controller-manager-embed-certs-046271" is "Ready"
	I1202 16:18:00.933953  615191 pod_ready.go:86] duration metric: took 185.250717ms for pod "kube-controller-manager-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:01.132237  615191 pod_ready.go:83] waiting for pod "kube-proxy-q9pxb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:01.532378  615191 pod_ready.go:94] pod "kube-proxy-q9pxb" is "Ready"
	I1202 16:18:01.532413  615191 pod_ready.go:86] duration metric: took 400.148403ms for pod "kube-proxy-q9pxb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:01.732266  615191 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:02.132067  615191 pod_ready.go:94] pod "kube-scheduler-embed-certs-046271" is "Ready"
	I1202 16:18:02.132099  615191 pod_ready.go:86] duration metric: took 399.802212ms for pod "kube-scheduler-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:02.132116  615191 pod_ready.go:40] duration metric: took 38.407682171s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:18:02.187882  615191 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 16:18:02.190701  615191 out.go:179] * Done! kubectl is now configured to use "embed-certs-046271" cluster and "default" namespace by default
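The readiness loop above polls the kube-system pods by label until each reports Ready (about 38s in total for this profile) before the run is declared done. The same checks can be repeated with kubectl against the labels listed in the log, for example (illustrative, assuming the kubectl context carries the profile name as reported above):

    kubectl --context embed-certs-046271 -n kube-system get pods -l k8s-app=kube-dns
    kubectl --context embed-certs-046271 -n kube-system get pods -l component=kube-apiserver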
	W1202 16:17:58.192329  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:18:00.691537  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	I1202 16:17:58.179767  624315 kubeadm.go:884] updating cluster {Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:17:58.179901  624315 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:17:58.179958  624315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:17:58.206197  624315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 16:17:58.206227  624315 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 16:17:58.206292  624315 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:17:58.206301  624315 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.206310  624315 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.206335  624315 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.206332  624315 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.206358  624315 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.206378  624315 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.206345  624315 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 16:17:58.207688  624315 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 16:17:58.207691  624315 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.207691  624315 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.207693  624315 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.207691  624315 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.207689  624315 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:17:58.207798  624315 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.207691  624315 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.389649  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.398497  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.416062  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.417018  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.427952  624315 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1202 16:17:58.428006  624315 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.428060  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.428438  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1202 16:17:58.437511  624315 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1202 16:17:58.437565  624315 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.437615  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.460811  624315 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1202 16:17:58.460830  624315 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1202 16:17:58.460863  624315 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.460867  624315 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.460871  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.460903  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.460905  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.468480  624315 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1202 16:17:58.468534  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.468536  624315 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 16:17:58.468689  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.482843  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.490077  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.490149  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.490175  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.494233  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.502739  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.502866  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 16:17:58.534397  624315 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1202 16:17:58.534456  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.534461  624315 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.534509  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.534571  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.534725  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.543764  624315 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1202 16:17:58.543814  624315 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.543823  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.543866  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.546006  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 16:17:58.568541  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 16:17:58.568608  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.568652  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 16:17:58.568944  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.569405  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.598889  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1202 16:17:58.598908  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 16:17:58.598916  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 16:17:58.598908  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 16:17:58.599016  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1202 16:17:58.598911  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.598914  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.598995  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 16:17:58.598996  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 16:17:58.605101  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 16:17:58.605197  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 16:17:58.660277  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 16:17:58.660294  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1202 16:17:58.660307  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.660308  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1202 16:17:58.660293  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 16:17:58.660346  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1202 16:17:58.660376  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.660397  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 16:17:58.677906  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 16:17:58.677939  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1202 16:17:58.732749  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 16:17:58.732792  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 16:17:58.732793  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1202 16:17:58.732887  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.732894  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 16:17:58.789503  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 16:17:58.789541  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1202 16:17:58.813684  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 16:17:58.813789  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 16:17:58.820547  624315 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 16:17:58.820618  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1202 16:17:58.860692  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 16:17:58.860733  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1202 16:17:59.246230  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
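Each "Loading image" step above copies a cached tar into /var/lib/minikube/images on the node and feeds it to podman, which shares its root image store with CRI-O. Done by hand, one such step would look roughly like this (paths and the load command as logged; the crictl check is an assumed verification step):

    sudo podman load -i /var/lib/minikube/images/pause_3.10.1
    sudo /usr/local/bin/crictl images | grep pause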
	I1202 16:17:59.246281  624315 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 16:17:59.246353  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 16:17:59.486290  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:00.367652  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.121268658s)
	I1202 16:18:00.367707  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 16:18:00.367746  624315 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 16:18:00.367761  624315 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1202 16:18:00.367803  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1202 16:18:00.367802  624315 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:00.367932  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:18:01.667535  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.299704769s)
	I1202 16:18:01.667566  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 16:18:01.667582  624315 ssh_runner.go:235] Completed: which crictl: (1.299628705s)
	I1202 16:18:01.667642  624315 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 16:18:01.667689  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1202 16:18:01.667644  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1202 16:18:03.191574  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:18:05.191913  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:18:07.192587  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	I1202 16:18:02.923984  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.256270074s)
	I1202 16:18:02.924014  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 16:18:02.924039  624315 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 16:18:02.924074  624315 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.256318055s)
	I1202 16:18:02.924088  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 16:18:02.924122  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:04.317999  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.393888467s)
	I1202 16:18:04.318026  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 16:18:04.318041  624315 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.393890267s)
	I1202 16:18:04.318057  624315 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 16:18:04.318112  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:04.318114  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 16:18:04.344156  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 16:18:04.344265  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 16:18:05.433824  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.115591929s)
	I1202 16:18:05.433855  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 16:18:05.433878  624315 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.089585497s)
	I1202 16:18:05.433893  624315 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 16:18:05.433913  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 16:18:05.433937  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1202 16:18:05.433969  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 16:18:06.583432  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.149420095s)
	I1202 16:18:06.583469  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 16:18:06.583499  624315 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 16:18:06.583549  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1202 16:18:07.120218  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 16:18:07.120270  624315 cache_images.go:125] Successfully loaded all cached images
	I1202 16:18:07.120277  624315 cache_images.go:94] duration metric: took 8.914034697s to LoadCachedImages
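	The image-cache phase above repeats one pattern per image: stat the archive under /var/lib/minikube/images on the node, scp it from the local cache only when the stat fails, then run `sudo podman load -i` on it. A minimal Go sketch of that check-then-transfer-then-load step, shelling out to ssh/scp against a placeholder `node` host (helper name, host and paths are illustrative, not minikube's ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// loadCachedImage mirrors the log above: check for the archive on the node,
// copy it over only if missing, then load it into the CRI-O image store.
func loadCachedImage(node, localArchive, remoteDir string) error {
	remote := filepath.Join(remoteDir, filepath.Base(localArchive))

	// Existence check: `stat -c "%s %y" <remote>` exits non-zero when the file is absent.
	if err := exec.Command("ssh", node, "stat", "-c", `"%s %y"`, remote).Run(); err != nil {
		// Transfer the archive from the local cache (the scp step in the log).
		if err := exec.Command("scp", localArchive, node+":"+remote).Run(); err != nil {
			return fmt.Errorf("scp %s: %w", localArchive, err)
		}
	}

	// Load the tarball into podman/CRI-O storage (the "Loading image" step in the log).
	out, err := exec.Command("ssh", node, "sudo", "podman", "load", "-i", remote).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical paths; the real ones come from minikube's image cache.
	if err := loadCachedImage("node", "/tmp/cache/pause_3.10.1", "/var/lib/minikube/images"); err != nil {
		fmt.Println(err)
	}
}
```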
	I1202 16:18:07.120288  624315 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1202 16:18:07.120393  624315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-682353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:18:07.120482  624315 ssh_runner.go:195] Run: crio config
	I1202 16:18:07.166513  624315 cni.go:84] Creating CNI manager for ""
	I1202 16:18:07.166534  624315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:18:07.166549  624315 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 16:18:07.166572  624315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-682353 NodeName:newest-cni-682353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:18:07.166713  624315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-682353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
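	The kubeadm config printed above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); the kubelet document deliberately disables disk-pressure eviction (evictionHard at 0%, imageGCHighThresholdPercent 100) and the kube-proxy document zeroes the conntrack timeouts so the sysctls are left alone. A small inspection sketch, assuming gopkg.in/yaml.v3 is available, that splits such a stream and prints each document's apiVersion/kind; it is an aid for reading the dump, not part of minikube:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each document carries apiVersion/kind, e.g. kubeadm.k8s.io/v1beta4 InitConfiguration.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}
```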
	I1202 16:18:07.166783  624315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 16:18:07.175146  624315 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 16:18:07.175204  624315 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 16:18:07.183178  624315 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1202 16:18:07.183195  624315 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1202 16:18:07.183241  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 16:18:07.183244  624315 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1202 16:18:07.183286  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 16:18:07.183302  624315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:07.188024  624315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 16:18:07.188056  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1202 16:18:07.201415  624315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 16:18:07.201452  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1202 16:18:07.201516  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 16:18:07.221248  624315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 16:18:07.221288  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
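	The three transfers above follow the binary.go:80 URLs: each kubectl/kubeadm/kubelet binary on dl.k8s.io sits next to a `.sha256` file that serves as the checksum source. A hedged Go sketch of that download-and-verify step (output path and error handling are illustrative, not minikube's downloader):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url into dst and checks it against the published <url>.sha256 file.
func fetchVerified(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the bytes while writing them to disk.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}

	got := hex.EncodeToString(h.Sum(nil))
	if !strings.HasPrefix(strings.TrimSpace(string(want)), got) {
		return fmt.Errorf("checksum mismatch for %s", url)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl"
	if err := fetchVerified(url, "/tmp/kubectl"); err != nil {
		fmt.Println(err)
	}
}
```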
	I1202 16:18:07.716552  624315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:18:07.724534  624315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1202 16:18:07.737551  624315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 16:18:07.778722  624315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 16:18:07.792804  624315 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:18:07.796626  624315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:18:07.838782  624315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:18:07.929263  624315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:18:07.956172  624315 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353 for IP: 192.168.103.2
	I1202 16:18:07.956193  624315 certs.go:195] generating shared ca certs ...
	I1202 16:18:07.956208  624315 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:07.956374  624315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:18:07.956413  624315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:18:07.956436  624315 certs.go:257] generating profile certs ...
	I1202 16:18:07.956496  624315 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.key
	I1202 16:18:07.956510  624315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.crt with IP's: []
	I1202 16:18:08.055915  624315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.crt ...
	I1202 16:18:08.055950  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.crt: {Name:mkbae4e216b534e22a7a22b5211ba0f085fa0a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.056133  624315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.key ...
	I1202 16:18:08.056145  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.key: {Name:mk01dd2149dcd5f6287686ae6bf7579abf16ae6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.056231  624315 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0
	I1202 16:18:08.056247  624315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt.5833a0e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1202 16:18:08.454875  624315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt.5833a0e0 ...
	I1202 16:18:08.454909  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt.5833a0e0: {Name:mk34e2dbb313339f9326d6e80e3c7620a9f90d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.455091  624315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0 ...
	I1202 16:18:08.455107  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0: {Name:mk521f77ecbe6526d4308034abb99ca52329446f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.455185  624315 certs.go:382] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt.5833a0e0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt
	I1202 16:18:08.455260  624315 certs.go:386] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key
	I1202 16:18:08.455314  624315 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key
	I1202 16:18:08.455328  624315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt with IP's: []
	I1202 16:18:08.725997  624315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt ...
	I1202 16:18:08.726029  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt: {Name:mk2542633dc1eea73aaea75c9b720c86ebeab857 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.726243  624315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key ...
	I1202 16:18:08.726261  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key: {Name:mkd92fd9f3993b30fa9a53ce61ae93d417dab751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
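	crypto.go:68/156 above generate the profile certificates, including an apiserver certificate whose IP SANs cover 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.103.2. A self-contained sketch of issuing a certificate with such IP SANs using Go's crypto/x509; key size, validity and self-signing here are illustrative simplifications (minikube signs these with its CA instead):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs matching the set logged above for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.103.2"),
		},
	}

	// Self-signed here for brevity.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```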
	I1202 16:18:08.726487  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:18:08.726533  624315 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:18:08.726543  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:18:08.726568  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:18:08.726598  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:18:08.726621  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:18:08.726661  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:18:08.727246  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:18:08.746871  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:18:08.766275  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:18:08.786826  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:18:08.805357  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 16:18:08.823135  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 16:18:08.840900  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:18:08.858454  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 16:18:08.876051  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:18:08.896345  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:18:08.914496  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:18:08.933005  624315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:18:08.946072  624315 ssh_runner.go:195] Run: openssl version
	I1202 16:18:08.952558  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:18:08.961641  624315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:18:08.965532  624315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:18:08.965592  624315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:18:09.000522  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:18:09.010236  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:18:09.019164  624315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:09.023057  624315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:09.023101  624315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:09.058108  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:18:09.067191  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:18:09.075994  624315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:18:09.079911  624315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:18:09.079961  624315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:18:09.114442  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
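	The loop above drops each extra CA into /usr/share/ca-certificates and links it under /etc/ssl/certs by its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0). A short sketch of that hash-and-symlink step, shelling out to openssl for the hash; paths and the helper name are placeholders:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash replicates `openssl x509 -hash -noout -in cert` followed by
// `ln -fs cert /etc/ssl/certs/<hash>.0`, as in the log above.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```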
	I1202 16:18:09.123256  624315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:18:09.127070  624315 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 16:18:09.127130  624315 kubeadm.go:401] StartCluster: {Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:09.127204  624315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:18:09.127247  624315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:18:09.154342  624315 cri.go:89] found id: ""
	I1202 16:18:09.154431  624315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:18:09.162826  624315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 16:18:09.170565  624315 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 16:18:09.170625  624315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 16:18:09.178276  624315 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 16:18:09.178296  624315 kubeadm.go:158] found existing configuration files:
	
	I1202 16:18:09.178342  624315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 16:18:09.185869  624315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 16:18:09.185937  624315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 16:18:09.194222  624315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 16:18:09.202327  624315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 16:18:09.202390  624315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 16:18:09.210067  624315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 16:18:09.217901  624315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 16:18:09.217973  624315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 16:18:09.225309  624315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 16:18:09.233081  624315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 16:18:09.233139  624315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
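	kubeadm.go:164 above applies the same rule to admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf: grep the file for https://control-plane.minikube.internal:8443 and remove it when the endpoint is absent (here the files simply do not exist yet). A minimal local sketch of that grep-or-remove cleanup, with the file list and endpoint hard-coded for illustration:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the expected control plane
		}
		// Missing file or stale endpoint: remove it so kubeadm init regenerates it.
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Println("remove:", err)
		}
	}
}
```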
	I1202 16:18:09.240540  624315 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 16:18:09.277009  624315 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 16:18:09.277090  624315 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 16:18:09.343291  624315 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 16:18:09.343358  624315 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 16:18:09.343404  624315 kubeadm.go:319] OS: Linux
	I1202 16:18:09.343489  624315 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 16:18:09.343580  624315 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 16:18:09.343628  624315 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 16:18:09.343723  624315 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 16:18:09.343803  624315 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 16:18:09.343870  624315 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 16:18:09.343928  624315 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 16:18:09.343987  624315 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 16:18:09.413832  624315 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 16:18:09.414018  624315 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 16:18:09.414143  624315 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 16:18:09.428395  624315 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 16:18:09.431691  624315 out.go:252]   - Generating certificates and keys ...
	I1202 16:18:09.431798  624315 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 16:18:09.431884  624315 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 16:18:09.551712  624315 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 16:18:09.619865  624315 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 16:18:09.700125  624315 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 16:18:09.785826  624315 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 16:18:10.002211  624315 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 16:18:10.002452  624315 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-682353] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 16:18:10.062821  624315 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 16:18:10.062997  624315 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-682353] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 16:18:10.262133  624315 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 16:18:10.339928  624315 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 16:18:10.406587  624315 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 16:18:10.406681  624315 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 16:18:10.473785  624315 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 16:18:10.509892  624315 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 16:18:10.565788  624315 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 16:18:10.713405  624315 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 16:18:10.837791  624315 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 16:18:10.838222  624315 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 16:18:10.844246  624315 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1202 16:18:09.691754  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	I1202 16:18:11.692782  617021 pod_ready.go:94] pod "coredns-66bc5c9577-6h6nr" is "Ready"
	I1202 16:18:11.692815  617021 pod_ready.go:86] duration metric: took 37.507156807s for pod "coredns-66bc5c9577-6h6nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.696097  617021 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.700661  617021 pod_ready.go:94] pod "etcd-default-k8s-diff-port-806420" is "Ready"
	I1202 16:18:11.700696  617021 pod_ready.go:86] duration metric: took 4.57279ms for pod "etcd-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.702761  617021 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.707235  617021 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-806420" is "Ready"
	I1202 16:18:11.707259  617021 pod_ready.go:86] duration metric: took 4.477641ms for pod "kube-apiserver-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.710403  617021 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.889880  617021 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-806420" is "Ready"
	I1202 16:18:11.889915  617021 pod_ready.go:86] duration metric: took 179.45256ms for pod "kube-controller-manager-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:12.090400  617021 pod_ready.go:83] waiting for pod "kube-proxy-574km" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:12.490369  617021 pod_ready.go:94] pod "kube-proxy-574km" is "Ready"
	I1202 16:18:12.490399  617021 pod_ready.go:86] duration metric: took 399.934021ms for pod "kube-proxy-574km" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:10.846051  624315 out.go:252]   - Booting up control plane ...
	I1202 16:18:10.846198  624315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 16:18:10.846293  624315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 16:18:10.847182  624315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 16:18:10.861199  624315 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 16:18:10.861316  624315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 16:18:10.868121  624315 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 16:18:10.868349  624315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 16:18:10.868404  624315 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 16:18:10.971946  624315 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 16:18:10.972060  624315 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 16:18:11.473760  624315 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.060862ms
	I1202 16:18:11.478484  624315 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 16:18:11.478573  624315 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1202 16:18:11.478669  624315 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 16:18:11.478738  624315 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 16:18:12.484398  624315 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005746987s
	I1202 16:18:12.691533  617021 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:13.090663  617021 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-806420" is "Ready"
	I1202 16:18:13.090696  617021 pod_ready.go:86] duration metric: took 399.134187ms for pod "kube-scheduler-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:13.090710  617021 pod_ready.go:40] duration metric: took 38.908912326s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:18:13.137409  617021 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 16:18:13.139493  617021 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-806420" cluster and "default" namespace by default
	I1202 16:18:13.031004  624315 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.552408562s
	I1202 16:18:14.979810  624315 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501228669s
	I1202 16:18:14.998836  624315 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 16:18:15.009002  624315 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 16:18:15.018325  624315 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 16:18:15.018557  624315 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-682353 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 16:18:15.026607  624315 kubeadm.go:319] [bootstrap-token] Using token: 8ssxbw.m6ls5tgd8f1crjpp
	I1202 16:18:15.027945  624315 out.go:252]   - Configuring RBAC rules ...
	I1202 16:18:15.028111  624315 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 16:18:15.032080  624315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 16:18:15.036812  624315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 16:18:15.039329  624315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 16:18:15.041644  624315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 16:18:15.044054  624315 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 16:18:15.386977  624315 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 16:18:15.803838  624315 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 16:18:16.386472  624315 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 16:18:16.387384  624315 kubeadm.go:319] 
	I1202 16:18:16.387505  624315 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 16:18:16.387525  624315 kubeadm.go:319] 
	I1202 16:18:16.387625  624315 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 16:18:16.387634  624315 kubeadm.go:319] 
	I1202 16:18:16.387663  624315 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 16:18:16.387746  624315 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 16:18:16.387813  624315 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 16:18:16.387821  624315 kubeadm.go:319] 
	I1202 16:18:16.387891  624315 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 16:18:16.387900  624315 kubeadm.go:319] 
	I1202 16:18:16.387976  624315 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 16:18:16.387993  624315 kubeadm.go:319] 
	I1202 16:18:16.388066  624315 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 16:18:16.388174  624315 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 16:18:16.388272  624315 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 16:18:16.388281  624315 kubeadm.go:319] 
	I1202 16:18:16.388408  624315 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 16:18:16.388542  624315 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 16:18:16.388552  624315 kubeadm.go:319] 
	I1202 16:18:16.388679  624315 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8ssxbw.m6ls5tgd8f1crjpp \
	I1202 16:18:16.388808  624315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a700026e2fe1634919809d9050f2aa4b3e0ccbee543d4881e1cd695d56e7eef6 \
	I1202 16:18:16.388847  624315 kubeadm.go:319] 	--control-plane 
	I1202 16:18:16.388856  624315 kubeadm.go:319] 
	I1202 16:18:16.388968  624315 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 16:18:16.388977  624315 kubeadm.go:319] 
	I1202 16:18:16.389085  624315 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8ssxbw.m6ls5tgd8f1crjpp \
	I1202 16:18:16.389209  624315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a700026e2fe1634919809d9050f2aa4b3e0ccbee543d4881e1cd695d56e7eef6 
	I1202 16:18:16.391629  624315 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 16:18:16.391734  624315 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
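	The join commands printed above pass `--discovery-token-ca-cert-hash sha256:...`, which kubeadm derives as the SHA-256 of the cluster CA certificate's Subject Public Key Info. A short sketch that recomputes the value from a PEM-encoded CA certificate (the input path is a placeholder):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("ca.crt") // e.g. a copy of /var/lib/minikube/certs/ca.crt
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```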
	I1202 16:18:16.391766  624315 cni.go:84] Creating CNI manager for ""
	I1202 16:18:16.391776  624315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:18:16.393985  624315 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 16:18:16.395361  624315 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 16:18:16.400369  624315 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1202 16:18:16.400392  624315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 16:18:16.415783  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 16:18:16.682346  624315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 16:18:16.682492  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-682353 minikube.k8s.io/updated_at=2025_12_02T16_18_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689 minikube.k8s.io/name=newest-cni-682353 minikube.k8s.io/primary=true
	I1202 16:18:16.682679  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:18:16.695743  624315 ops.go:34] apiserver oom_adj: -16
	I1202 16:18:16.793011  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:18:17.293670  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Dec 02 16:17:33 embed-certs-046271 crio[569]: time="2025-12-02T16:17:33.613398583Z" level=info msg="Started container" PID=1731 containerID=4e3969029638e659ef754301ede73fb0488601bf95a80b6e49bd77a95e8d801f description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lwffd/kubernetes-dashboard id=7f13f7a0-696b-4fde-b464-e402cc27c3f6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ddd40757db9a700cb7416ee34aad7731e590c63887fefd84fbf1efe7973f0f10
	Dec 02 16:17:33 embed-certs-046271 crio[569]: time="2025-12-02T16:17:33.616338458Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 16:17:33 embed-certs-046271 crio[569]: time="2025-12-02T16:17:33.616361897Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.833468055Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=407636f1-9625-45ca-94ba-94a63cf3a387 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.834605237Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d7c6eb9c-b060-4b24-b8f5-1ad01db46be9 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.835694516Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper" id=caa9491e-215b-401c-9b81-433ff808445f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.83582601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.842114747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.843276183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.872038547Z" level=info msg="Created container 5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper" id=caa9491e-215b-401c-9b81-433ff808445f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.87281716Z" level=info msg="Starting container: 5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2" id=9b0f62a1-ec7c-4c1c-bf89-8b2873d23d4e name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.874814783Z" level=info msg="Started container" PID=1793 containerID=5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper id=9b0f62a1-ec7c-4c1c-bf89-8b2873d23d4e name=/runtime.v1.RuntimeService/StartContainer sandboxID=d270de3ac8ec8c28a7dde9ad93f4f6cca0089c323450254e5bed447e89767289
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.954200997Z" level=info msg="Removing container: 0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c" id=78412a44-f31d-4484-bebe-020ca987f11f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:50 embed-certs-046271 crio[569]: time="2025-12-02T16:17:50.965138722Z" level=info msg="Removed container 0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper" id=78412a44-f31d-4484-bebe-020ca987f11f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.833409033Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=25d0e1f8-746e-4a03-80c3-582e022f9ad3 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.834465222Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7fab009a-ca8e-43f3-b43b-b89b0ae08382 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.835734984Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper" id=8df0a2b4-0875-4371-afdc-a2b466ad015d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.835883888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.843398764Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.843978795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.87539387Z" level=info msg="Created container 286ad93e33bb4ff3d193a144de9658655d06b742be65e682ec9fa7e8d5f3a8f4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper" id=8df0a2b4-0875-4371-afdc-a2b466ad015d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.876056847Z" level=info msg="Starting container: 286ad93e33bb4ff3d193a144de9658655d06b742be65e682ec9fa7e8d5f3a8f4" id=8e87dac6-f9fc-4583-9e51-b4546534f1d3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:18:13 embed-certs-046271 crio[569]: time="2025-12-02T16:18:13.878349939Z" level=info msg="Started container" PID=1833 containerID=286ad93e33bb4ff3d193a144de9658655d06b742be65e682ec9fa7e8d5f3a8f4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper id=8e87dac6-f9fc-4583-9e51-b4546534f1d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d270de3ac8ec8c28a7dde9ad93f4f6cca0089c323450254e5bed447e89767289
	Dec 02 16:18:14 embed-certs-046271 crio[569]: time="2025-12-02T16:18:14.018846445Z" level=info msg="Removing container: 5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2" id=8f85be42-9d36-4c5b-b2a5-3e58308e12e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:18:14 embed-certs-046271 crio[569]: time="2025-12-02T16:18:14.031873438Z" level=info msg="Removed container 5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4/dashboard-metrics-scraper" id=8f85be42-9d36-4c5b-b2a5-3e58308e12e7 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	286ad93e33bb4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   d270de3ac8ec8       dashboard-metrics-scraper-6ffb444bf9-w9gp4   kubernetes-dashboard
	4e3969029638e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   ddd40757db9a7       kubernetes-dashboard-855c9754f9-lwffd        kubernetes-dashboard
	173719ad1715f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Running             storage-provisioner         1                   435347a1882eb       storage-provisioner                          kube-system
	1442f4434464d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   48f24064849e8       busybox                                      default
	7a01cce632ce4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   8eb2b33b7a99c       coredns-66bc5c9577-f2vhx                     kube-system
	d5c5bd19a7977       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   9684efb97ce9a       kindnet-wpj6k                                kube-system
	378f281936523       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   435347a1882eb       storage-provisioner                          kube-system
	b7bbe4338eaa7       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           56 seconds ago      Running             kube-proxy                  0                   8b83459aa1591       kube-proxy-q9pxb                             kube-system
	c2276216c1487       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           59 seconds ago      Running             kube-scheduler              0                   ed2499503659a       kube-scheduler-embed-certs-046271            kube-system
	16ad18068e5f3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           59 seconds ago      Running             etcd                        0                   1349215b79b4b       etcd-embed-certs-046271                      kube-system
	3bf3111e94363       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           59 seconds ago      Running             kube-apiserver              0                   0975481c1d9dd       kube-apiserver-embed-certs-046271            kube-system
	698ef956828ff       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           59 seconds ago      Running             kube-controller-manager     0                   2dd3ccb60e646       kube-controller-manager-embed-certs-046271   kube-system
	
	
	==> coredns [7a01cce632ce473b9f13d211e46baeb90d009c8a9471ff0c1ed098c62fef035b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59409 - 55555 "HINFO IN 8696943024032006366.2951031596223566118. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022692505s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-046271
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-046271
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=embed-certs-046271
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_16_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:16:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-046271
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:18:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:17:53 +0000   Tue, 02 Dec 2025 16:16:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:17:53 +0000   Tue, 02 Dec 2025 16:16:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:17:53 +0000   Tue, 02 Dec 2025 16:16:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:17:53 +0000   Tue, 02 Dec 2025 16:16:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-046271
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                e2b6e9a3-1779-45e2-a9a6-d48b0dea91ba
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-f2vhx                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-embed-certs-046271                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-wpj6k                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-embed-certs-046271             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-embed-certs-046271    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-q9pxb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-embed-certs-046271             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-w9gp4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lwffd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node embed-certs-046271 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node embed-certs-046271 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node embed-certs-046271 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           111s               node-controller  Node embed-certs-046271 event: Registered Node embed-certs-046271 in Controller
	  Normal  NodeReady                99s                kubelet          Node embed-certs-046271 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node embed-certs-046271 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node embed-certs-046271 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node embed-certs-046271 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node embed-certs-046271 event: Registered Node embed-certs-046271 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [16ad18068e5f3c997cc9fd8d07b82668917afab1c7be18e0282d7eaaa341d8c1] <==
	{"level":"warn","ts":"2025-12-02T16:17:21.366392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.376476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.384355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.392857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.400199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.408295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.416039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.423030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.431920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.456982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.463989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.471521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.479185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.486053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.497820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.505835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.513318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.519857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.528468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.535807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.556808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.563624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.570243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:21.615593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48328","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T16:18:13.539717Z","caller":"traceutil/trace.go:172","msg":"trace[681584741] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"151.034126ms","start":"2025-12-02T16:18:13.388649Z","end":"2025-12-02T16:18:13.539683Z","steps":["trace[681584741] 'process raft request'  (duration: 63.041073ms)","trace[681584741] 'compare'  (duration: 87.805684ms)"],"step_count":2}
	
	
	==> kernel <==
	 16:18:19 up  3:00,  0 user,  load average: 3.68, 4.02, 2.72
	Linux embed-certs-046271 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d5c5bd19a797757b7b8b4e7b9bd03cab31a65007d9e44dc873e73b592003f935] <==
	I1202 16:17:23.479536       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1202 16:17:23.479763       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:17:23.479792       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:17:23.479822       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:17:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:17:23.588065       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:17:23.588117       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:17:23.588134       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:17:23.588616       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1202 16:17:23.603866       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 16:17:23.604022       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1202 16:17:25.088531       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:17:25.088575       1 metrics.go:72] Registering metrics
	I1202 16:17:25.088714       1 controller.go:711] "Syncing nftables rules"
	I1202 16:17:33.588498       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 16:17:33.588577       1 main.go:301] handling current node
	I1202 16:17:43.592501       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 16:17:43.592535       1 main.go:301] handling current node
	I1202 16:17:53.588921       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 16:17:53.588956       1 main.go:301] handling current node
	I1202 16:18:03.589504       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 16:18:03.589556       1 main.go:301] handling current node
	I1202 16:18:13.594619       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 16:18:13.594651       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3bf3111e9436304c788bb6ef52a85daf72acb7556f1bd1e4dbd20f1c48b40884] <==
	I1202 16:17:22.109797       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 16:17:22.109811       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 16:17:22.107629       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1202 16:17:22.109854       1 aggregator.go:171] initial CRD sync complete...
	I1202 16:17:22.109862       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 16:17:22.109867       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 16:17:22.109873       1 cache.go:39] Caches are synced for autoregister controller
	I1202 16:17:22.120755       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 16:17:22.120915       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1202 16:17:22.121555       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 16:17:22.133023       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1202 16:17:22.133139       1 policy_source.go:240] refreshing policies
	I1202 16:17:22.151685       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:17:22.172608       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:17:22.430253       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:17:22.459542       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:17:22.480151       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:17:22.489086       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:17:22.496276       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:17:22.540900       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.164.34"}
	I1202 16:17:22.554641       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.214.15"}
	I1202 16:17:23.009784       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 16:17:25.510883       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 16:17:25.660716       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:17:25.961116       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [698ef956828ff2ca307684a986b76f4d7810277a835e5153b0a6cfc108ff4852] <==
	I1202 16:17:25.416930       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1202 16:17:25.418096       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 16:17:25.420316       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 16:17:25.422596       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1202 16:17:25.432932       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 16:17:25.439189       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 16:17:25.440375       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1202 16:17:25.454833       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 16:17:25.457525       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 16:17:25.457538       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 16:17:25.457566       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 16:17:25.457596       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 16:17:25.457636       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 16:17:25.457677       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 16:17:25.457746       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 16:17:25.459316       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 16:17:25.461561       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 16:17:25.461589       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 16:17:25.463394       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 16:17:25.463477       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 16:17:25.463553       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-046271"
	I1202 16:17:25.463598       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 16:17:25.464916       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 16:17:25.467215       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 16:17:25.482159       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b7bbe4338eaa713a9f46532f0d5f4f8fdd4e7eb320af43e5d146a44067c124a7] <==
	I1202 16:17:23.264149       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:17:23.346502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 16:17:23.447539       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 16:17:23.447572       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1202 16:17:23.447686       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:17:23.466837       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:17:23.466899       1 server_linux.go:132] "Using iptables Proxier"
	I1202 16:17:23.472596       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:17:23.472997       1 server.go:527] "Version info" version="v1.34.2"
	I1202 16:17:23.473092       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:17:23.474177       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:17:23.474204       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:17:23.474298       1 config.go:200] "Starting service config controller"
	I1202 16:17:23.474314       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:17:23.474329       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:17:23.474342       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:17:23.474469       1 config.go:309] "Starting node config controller"
	I1202 16:17:23.474502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:17:23.474511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:17:23.574383       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 16:17:23.574442       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 16:17:23.574393       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c2276216c1487f93e3277d2422350dd969b6a1c3c3470ca0ad9cf54e25deb70f] <==
	I1202 16:17:20.856151       1 serving.go:386] Generated self-signed cert in-memory
	W1202 16:17:22.052526       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 16:17:22.052591       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 16:17:22.052607       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 16:17:22.052617       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 16:17:22.103189       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 16:17:22.103227       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:17:22.106509       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 16:17:22.106975       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:17:22.107006       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:17:22.107033       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 16:17:22.207562       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 16:17:28 embed-certs-046271 kubelet[735]: I1202 16:17:28.880514     735 scope.go:117] "RemoveContainer" containerID="c6fbffd6fec06441cb182491663ecf598571f5c14a1754fae8d923edbb49ba79"
	Dec 02 16:17:29 embed-certs-046271 kubelet[735]: I1202 16:17:29.886239     735 scope.go:117] "RemoveContainer" containerID="c6fbffd6fec06441cb182491663ecf598571f5c14a1754fae8d923edbb49ba79"
	Dec 02 16:17:29 embed-certs-046271 kubelet[735]: I1202 16:17:29.886462     735 scope.go:117] "RemoveContainer" containerID="0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c"
	Dec 02 16:17:29 embed-certs-046271 kubelet[735]: E1202 16:17:29.886672     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w9gp4_kubernetes-dashboard(2f530710-0c0f-478b-b034-2ca8725a5222)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4" podUID="2f530710-0c0f-478b-b034-2ca8725a5222"
	Dec 02 16:17:30 embed-certs-046271 kubelet[735]: I1202 16:17:30.588693     735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 02 16:17:30 embed-certs-046271 kubelet[735]: I1202 16:17:30.894616     735 scope.go:117] "RemoveContainer" containerID="0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c"
	Dec 02 16:17:30 embed-certs-046271 kubelet[735]: E1202 16:17:30.897812     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w9gp4_kubernetes-dashboard(2f530710-0c0f-478b-b034-2ca8725a5222)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4" podUID="2f530710-0c0f-478b-b034-2ca8725a5222"
	Dec 02 16:17:33 embed-certs-046271 kubelet[735]: I1202 16:17:33.920150     735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lwffd" podStartSLOduration=0.712241732 podStartE2EDuration="7.920128512s" podCreationTimestamp="2025-12-02 16:17:26 +0000 UTC" firstStartedPulling="2025-12-02 16:17:26.359496702 +0000 UTC m=+6.621150412" lastFinishedPulling="2025-12-02 16:17:33.567383486 +0000 UTC m=+13.829037192" observedRunningTime="2025-12-02 16:17:33.920096314 +0000 UTC m=+14.181750037" watchObservedRunningTime="2025-12-02 16:17:33.920128512 +0000 UTC m=+14.181782235"
	Dec 02 16:17:38 embed-certs-046271 kubelet[735]: I1202 16:17:38.845102     735 scope.go:117] "RemoveContainer" containerID="0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c"
	Dec 02 16:17:38 embed-certs-046271 kubelet[735]: E1202 16:17:38.845353     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w9gp4_kubernetes-dashboard(2f530710-0c0f-478b-b034-2ca8725a5222)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4" podUID="2f530710-0c0f-478b-b034-2ca8725a5222"
	Dec 02 16:17:50 embed-certs-046271 kubelet[735]: I1202 16:17:50.832757     735 scope.go:117] "RemoveContainer" containerID="0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c"
	Dec 02 16:17:50 embed-certs-046271 kubelet[735]: I1202 16:17:50.952717     735 scope.go:117] "RemoveContainer" containerID="0245db3ef9f53d0236ab142102478c2d2edfe1efed1d7645d048ee65025a225c"
	Dec 02 16:17:50 embed-certs-046271 kubelet[735]: I1202 16:17:50.952935     735 scope.go:117] "RemoveContainer" containerID="5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2"
	Dec 02 16:17:50 embed-certs-046271 kubelet[735]: E1202 16:17:50.953166     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w9gp4_kubernetes-dashboard(2f530710-0c0f-478b-b034-2ca8725a5222)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4" podUID="2f530710-0c0f-478b-b034-2ca8725a5222"
	Dec 02 16:17:58 embed-certs-046271 kubelet[735]: I1202 16:17:58.845593     735 scope.go:117] "RemoveContainer" containerID="5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2"
	Dec 02 16:17:58 embed-certs-046271 kubelet[735]: E1202 16:17:58.845949     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w9gp4_kubernetes-dashboard(2f530710-0c0f-478b-b034-2ca8725a5222)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4" podUID="2f530710-0c0f-478b-b034-2ca8725a5222"
	Dec 02 16:18:13 embed-certs-046271 kubelet[735]: I1202 16:18:13.832916     735 scope.go:117] "RemoveContainer" containerID="5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2"
	Dec 02 16:18:14 embed-certs-046271 kubelet[735]: I1202 16:18:14.017399     735 scope.go:117] "RemoveContainer" containerID="5e28f7fe26c307fa8fc13eefd2850928dc7490333d33240afbfe0450c17515a2"
	Dec 02 16:18:14 embed-certs-046271 kubelet[735]: I1202 16:18:14.017653     735 scope.go:117] "RemoveContainer" containerID="286ad93e33bb4ff3d193a144de9658655d06b742be65e682ec9fa7e8d5f3a8f4"
	Dec 02 16:18:14 embed-certs-046271 kubelet[735]: E1202 16:18:14.017864     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w9gp4_kubernetes-dashboard(2f530710-0c0f-478b-b034-2ca8725a5222)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w9gp4" podUID="2f530710-0c0f-478b-b034-2ca8725a5222"
	Dec 02 16:18:14 embed-certs-046271 kubelet[735]: I1202 16:18:14.423495     735 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 02 16:18:14 embed-certs-046271 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 16:18:14 embed-certs-046271 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 16:18:14 embed-certs-046271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 16:18:14 embed-certs-046271 systemd[1]: kubelet.service: Consumed 1.838s CPU time.
	
	
	==> kubernetes-dashboard [4e3969029638e659ef754301ede73fb0488601bf95a80b6e49bd77a95e8d801f] <==
	2025/12/02 16:17:33 Starting overwatch
	2025/12/02 16:17:33 Using namespace: kubernetes-dashboard
	2025/12/02 16:17:33 Using in-cluster config to connect to apiserver
	2025/12/02 16:17:33 Using secret token for csrf signing
	2025/12/02 16:17:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 16:17:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 16:17:33 Successful initial request to the apiserver, version: v1.34.2
	2025/12/02 16:17:33 Generating JWE encryption key
	2025/12/02 16:17:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 16:17:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 16:17:33 Initializing JWE encryption key from synchronized object
	2025/12/02 16:17:33 Creating in-cluster Sidecar client
	2025/12/02 16:17:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:33 Serving insecurely on HTTP port: 9090
	2025/12/02 16:18:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [173719ad1715f37da7c13d636a2be3e7910afc81ad6d92a88ab2d5268e0b4ad0] <==
	W1202 16:17:55.429458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:57.432456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:57.436401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:59.440248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:17:59.444219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:01.447467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:01.451171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:03.455096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:03.461085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:05.464594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:05.469108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:07.473124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:07.478194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:09.481336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:09.486307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:11.489802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:11.493570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:13.540841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:13.562982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:15.567234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:15.572691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:17.576290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:17.580294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:19.584036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:19.588016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [378f281936523058e29cebe67cdf6b667293a6726f2577c5653947947d6210ab] <==
	I1202 16:17:23.227310       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 16:17:23.232071       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
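The embed-certs log dump above shows dashboard-metrics-scraper stuck in CrashLoopBackOff (container status lists it as Exited on attempt 3). A minimal sketch for pulling the crashed container's output by hand, using the context, namespace, and pod name taken from the log above; these are ordinary kubectl steps for follow-up, not commands the test harness itself ran:

	# logs from the last failed run of the scraper container (illustrative manual step)
	kubectl --context embed-certs-046271 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-6ffb444bf9-w9gp4 --previous
	# pod events, to see the back-off timeline the kubelet log refers to
	kubectl --context embed-certs-046271 -n kubernetes-dashboard \
	  describe pod dashboard-metrics-scraper-6ffb444bf9-w9gp4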
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-046271 -n embed-certs-046271
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-046271 -n embed-certs-046271: exit status 2 (351.14343ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-046271 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.48s)
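To iterate on this failure outside CI, the subtest can be targeted with the standard go test -run filter. A hedged sketch, assuming minikube's usual test/integration layout and an already-built out/minikube-linux-amd64 binary; the repository's Makefile targets may add further required flags or arguments:

	# run only the failing Pause subtest
	go test ./test/integration -v -timeout 30m \
	  -run 'TestStartStop/group/embed-certs/serial/Pause'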

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-682353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-682353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (255.700303ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-682353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
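The stderr above pins the failure on minikube's paused-state check: `sudo runc list -f json` exits with status 1 because /run/runc cannot be opened inside the node. A minimal sketch for confirming that state by hand, assuming the profile name shown in this test; only the quoted runc command is taken from the report, the surrounding steps are ordinary inspection commands:

	# re-run the exact check the addon code reports as failing
	out/minikube-linux-amd64 -p newest-cni-682353 ssh -- sudo runc list -f json
	# the directory the error message says is missing
	out/minikube-linux-amd64 -p newest-cni-682353 ssh -- ls -ld /run/runc
	# what the CRI-O runtime itself reports for container state
	out/minikube-linux-amd64 -p newest-cni-682353 ssh -- sudo crictl ps -a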
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-682353
helpers_test.go:243: (dbg) docker inspect newest-cni-682353:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188",
	        "Created": "2025-12-02T16:17:54.498495762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 625435,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:17:54.535824115Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188/hostname",
	        "HostsPath": "/var/lib/docker/containers/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188/hosts",
	        "LogPath": "/var/lib/docker/containers/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188-json.log",
	        "Name": "/newest-cni-682353",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-682353:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-682353",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188",
	                "LowerDir": "/var/lib/docker/overlay2/1f044ac655221173ad21f42de851de89b4294bf00ed7588a758e1c216c20f865-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f044ac655221173ad21f42de851de89b4294bf00ed7588a758e1c216c20f865/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f044ac655221173ad21f42de851de89b4294bf00ed7588a758e1c216c20f865/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f044ac655221173ad21f42de851de89b4294bf00ed7588a758e1c216c20f865/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-682353",
	                "Source": "/var/lib/docker/volumes/newest-cni-682353/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-682353",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-682353",
	                "name.minikube.sigs.k8s.io": "newest-cni-682353",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "064659fde266bb8a345f0bb120bf9f929f18e668f95d4badbf1d97d11040c93e",
	            "SandboxKey": "/var/run/docker/netns/064659fde266",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33262"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33263"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-682353": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ac149f6cf88728cc866ee4dd469920e42598af4e720f482a4a4ddfe77f5ff8f",
	                    "EndpointID": "ca0f99cd01ed073e9091de1afe90874fedf0fe6221d74df82f68e04b9214e25f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "f6:6d:5a:67:10:3c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-682353",
	                        "a775ae5be075"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-682353 -n newest-cni-682353
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-682353 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-682353 logs -n 25: (1.183602956s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-380588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p no-preload-534842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:16 UTC │
	│ start   │ -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-046271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │                     │
	│ stop    │ -p embed-certs-046271 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:16 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-806420 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-046271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-806420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ old-k8s-version-380588 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p old-k8s-version-380588 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ no-preload-534842 image list --format=json                                                                                                                                                                                                           │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p no-preload-534842 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ image   │ embed-certs-046271 image list --format=json                                                                                                                                                                                                          │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p embed-certs-046271 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ delete  │ -p embed-certs-046271                                                                                                                                                                                                                                │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-682353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:17:52
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:17:52.644799  624315 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:17:52.644911  624315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:52.644919  624315 out.go:374] Setting ErrFile to fd 2...
	I1202 16:17:52.644923  624315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:17:52.645119  624315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:17:52.645660  624315 out.go:368] Setting JSON to false
	I1202 16:17:52.646996  624315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10814,"bootTime":1764681459,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:17:52.647061  624315 start.go:143] virtualization: kvm guest
	I1202 16:17:52.649119  624315 out.go:179] * [newest-cni-682353] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:17:52.650307  624315 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:17:52.650341  624315 notify.go:221] Checking for updates...
	I1202 16:17:52.652574  624315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:17:52.653891  624315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:17:52.655069  624315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:17:52.656462  624315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:17:52.658069  624315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:17:52.659797  624315 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:52.659881  624315 config.go:182] Loaded profile config "embed-certs-046271": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:17:52.659969  624315 config.go:182] Loaded profile config "no-preload-534842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:17:52.660075  624315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:17:52.686164  624315 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:17:52.686294  624315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:52.756277  624315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-02 16:17:52.744065656 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:52.756384  624315 docker.go:319] overlay module found
	I1202 16:17:52.760989  624315 out.go:179] * Using the docker driver based on user configuration
	I1202 16:17:52.762385  624315 start.go:309] selected driver: docker
	I1202 16:17:52.762402  624315 start.go:927] validating driver "docker" against <nil>
	I1202 16:17:52.762413  624315 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:17:52.762997  624315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:17:52.838284  624315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-02 16:17:52.827987453 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:17:52.838514  624315 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1202 16:17:52.838550  624315 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1202 16:17:52.838830  624315 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 16:17:52.844583  624315 out.go:179] * Using Docker driver with root privileges
	I1202 16:17:52.845732  624315 cni.go:84] Creating CNI manager for ""
	I1202 16:17:52.845789  624315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:17:52.845805  624315 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 16:17:52.845898  624315 start.go:353] cluster config:
	{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:17:52.847341  624315 out.go:179] * Starting "newest-cni-682353" primary control-plane node in "newest-cni-682353" cluster
	I1202 16:17:52.848449  624315 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:17:52.850119  624315 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	W1202 16:17:48.735446  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:51.235172  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	I1202 16:17:52.851463  624315 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:17:52.851567  624315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:17:52.877300  624315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:17:52.877327  624315 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 16:17:53.534100  624315 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1202 16:17:53.777545  624315 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1202 16:17:53.777737  624315 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json ...
	I1202 16:17:53.777776  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json: {Name:mk1da2fd97ec61d8b0621ec4e77abec4e577dd62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:17:53.777919  624315 cache.go:107] acquiring lock: {Name:mk6b8eeb5270fa67a5a87f892f37de1ae4805f75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.777969  624315 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:17:53.777962  624315 cache.go:107] acquiring lock: {Name:mk3f4d40fdf359ce0573637a386f14c0a310cdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778008  624315 start.go:360] acquireMachinesLock for newest-cni-682353: {Name:mkfed8f02380af59f92aa0b6f8ae02a29dbe0c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.777991  624315 cache.go:107] acquiring lock: {Name:mka2aa325920dfb2720f9036278856e8dac95446 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778031  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 16:17:53.778042  624315 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 144.394µs
	I1202 16:17:53.778050  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 16:17:53.778058  624315 start.go:364] duration metric: took 37.681µs to acquireMachinesLock for "newest-cni-682353"
	I1202 16:17:53.778060  624315 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 16:17:53.778062  624315 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 116.303µs
	I1202 16:17:53.778072  624315 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 16:17:53.778078  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 16:17:53.778080  624315 cache.go:107] acquiring lock: {Name:mkce5d795e0ca01a9ee3d674d001cd6e04bbbfba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778090  624315 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 120.36µs
	I1202 16:17:53.778124  624315 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 16:17:53.778077  624315 start.go:93] Provisioning new machine with config: &{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:17:53.778170  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 16:17:53.778176  624315 start.go:125] createHost starting for "" (driver="docker")
	I1202 16:17:53.778182  624315 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 105.339µs
	I1202 16:17:53.778196  624315 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 16:17:53.778173  624315 cache.go:107] acquiring lock: {Name:mk91bc91bcc535b3edd8200bf0c06e4d97781487 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778225  624315 cache.go:107] acquiring lock: {Name:mk17b77bf762047097cbe060b18dc85ae78a9727 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778251  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 16:17:53.778262  624315 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 102.828µs
	I1202 16:17:53.778276  624315 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 16:17:53.778262  624315 cache.go:107] acquiring lock: {Name:mkec45cdfdbdafc0ef1296b9d77662a50add1cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778298  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 16:17:53.778312  624315 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 111.218µs
	I1202 16:17:53.778316  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 16:17:53.778322  624315 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 16:17:53.778315  624315 cache.go:107] acquiring lock: {Name:mk821cef64e8468a2739d03d2e1019ac980bf2cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:17:53.778326  624315 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 75.311µs
	W1202 16:17:53.692416  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:17:56.191595  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	I1202 16:17:53.778371  624315 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 16:17:53.778382  624315 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 75.582µs
	I1202 16:17:53.778390  624315 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 16:17:53.778338  624315 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 16:17:53.778410  624315 cache.go:87] Successfully saved all images to host disk.
	I1202 16:17:53.788193  624315 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 16:17:53.788477  624315 start.go:159] libmachine.API.Create for "newest-cni-682353" (driver="docker")
	I1202 16:17:53.788525  624315 client.go:173] LocalClient.Create starting
	I1202 16:17:53.788618  624315 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem
	I1202 16:17:53.788655  624315 main.go:143] libmachine: Decoding PEM data...
	I1202 16:17:53.788673  624315 main.go:143] libmachine: Parsing certificate...
	I1202 16:17:53.788729  624315 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem
	I1202 16:17:53.788748  624315 main.go:143] libmachine: Decoding PEM data...
	I1202 16:17:53.788759  624315 main.go:143] libmachine: Parsing certificate...
	I1202 16:17:53.789143  624315 cli_runner.go:164] Run: docker network inspect newest-cni-682353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 16:17:53.809974  624315 cli_runner.go:211] docker network inspect newest-cni-682353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 16:17:53.810064  624315 network_create.go:284] running [docker network inspect newest-cni-682353] to gather additional debugging logs...
	I1202 16:17:53.810092  624315 cli_runner.go:164] Run: docker network inspect newest-cni-682353
	W1202 16:17:53.830774  624315 cli_runner.go:211] docker network inspect newest-cni-682353 returned with exit code 1
	I1202 16:17:53.830803  624315 network_create.go:287] error running [docker network inspect newest-cni-682353]: docker network inspect newest-cni-682353: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-682353 not found
	I1202 16:17:53.830820  624315 network_create.go:289] output of [docker network inspect newest-cni-682353]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-682353 not found
	
	** /stderr **
	I1202 16:17:53.830928  624315 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:17:53.851114  624315 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-59c4d474daec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:20:cf:7a:79:c5} reservation:<nil>}
	I1202 16:17:53.851815  624315 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-208582b1a4af IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:5b:fe:2d:46:75} reservation:<nil>}
	I1202 16:17:53.852643  624315 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-230a00bd70ce IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:8f:10:7f:8e:d3} reservation:<nil>}
	I1202 16:17:53.853252  624315 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f242ea03e26e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:4d:9d:95:a5:56} reservation:<nil>}
	I1202 16:17:53.853946  624315 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-71c0f0496cc5 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:fa:9c:49:d2:0f:a1} reservation:<nil>}
	I1202 16:17:53.854357  624315 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-26f54f8ab80d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:22:17:9c:97:61:b0} reservation:<nil>}
	I1202 16:17:53.854995  624315 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0027bef70}
	I1202 16:17:53.855021  624315 network_create.go:124] attempt to create docker network newest-cni-682353 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 16:17:53.855077  624315 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-682353 newest-cni-682353
	I1202 16:17:53.916078  624315 network_create.go:108] docker network newest-cni-682353 192.168.103.0/24 created
	I1202 16:17:53.916113  624315 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-682353" container
	I1202 16:17:53.916193  624315 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 16:17:53.938300  624315 cli_runner.go:164] Run: docker volume create newest-cni-682353 --label name.minikube.sigs.k8s.io=newest-cni-682353 --label created_by.minikube.sigs.k8s.io=true
	I1202 16:17:53.959487  624315 oci.go:103] Successfully created a docker volume newest-cni-682353
	I1202 16:17:53.959565  624315 cli_runner.go:164] Run: docker run --rm --name newest-cni-682353-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-682353 --entrypoint /usr/bin/test -v newest-cni-682353:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 16:17:54.413877  624315 oci.go:107] Successfully prepared a docker volume newest-cni-682353
	I1202 16:17:54.413932  624315 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1202 16:17:54.414025  624315 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 16:17:54.414062  624315 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 16:17:54.414111  624315 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 16:17:54.479852  624315 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-682353 --name newest-cni-682353 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-682353 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-682353 --network newest-cni-682353 --ip 192.168.103.2 --volume newest-cni-682353:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 16:17:54.770265  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Running}}
	I1202 16:17:54.793036  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:17:54.812286  624315 cli_runner.go:164] Run: docker exec newest-cni-682353 stat /var/lib/dpkg/alternatives/iptables
	I1202 16:17:54.862730  624315 oci.go:144] the created container "newest-cni-682353" has a running status.
	I1202 16:17:54.862790  624315 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa...
	I1202 16:17:55.048010  624315 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 16:17:55.089312  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:17:55.111220  624315 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 16:17:55.111254  624315 kic_runner.go:114] Args: [docker exec --privileged newest-cni-682353 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 16:17:55.162548  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:17:55.185203  624315 machine.go:94] provisionDockerMachine start ...
	I1202 16:17:55.185292  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:55.206996  624315 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:55.207255  624315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33260 <nil> <nil>}
	I1202 16:17:55.207290  624315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:17:55.350585  624315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-682353
	
	I1202 16:17:55.350623  624315 ubuntu.go:182] provisioning hostname "newest-cni-682353"
	I1202 16:17:55.350718  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:55.371361  624315 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:55.371720  624315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33260 <nil> <nil>}
	I1202 16:17:55.371746  624315 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-682353 && echo "newest-cni-682353" | sudo tee /etc/hostname
	I1202 16:17:55.527645  624315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-682353
	
	I1202 16:17:55.527735  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:55.546178  624315 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:55.546465  624315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33260 <nil> <nil>}
	I1202 16:17:55.546490  624315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-682353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-682353/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-682353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:17:55.688466  624315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:17:55.688504  624315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:17:55.688529  624315 ubuntu.go:190] setting up certificates
	I1202 16:17:55.688543  624315 provision.go:84] configureAuth start
	I1202 16:17:55.688607  624315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:17:55.707288  624315 provision.go:143] copyHostCerts
	I1202 16:17:55.707359  624315 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:17:55.707372  624315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:17:55.707483  624315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:17:55.707608  624315 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:17:55.707622  624315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:17:55.707663  624315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:17:55.707741  624315 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:17:55.707750  624315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:17:55.707784  624315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:17:55.707854  624315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.newest-cni-682353 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-682353]
	I1202 16:17:55.874758  624315 provision.go:177] copyRemoteCerts
	I1202 16:17:55.874827  624315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:17:55.874887  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:55.893962  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:55.995114  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:17:56.016141  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 16:17:56.034903  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 16:17:56.053997  624315 provision.go:87] duration metric: took 365.435069ms to configureAuth
	I1202 16:17:56.054029  624315 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:17:56.054223  624315 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:17:56.054345  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.073989  624315 main.go:143] libmachine: Using SSH client type: native
	I1202 16:17:56.074282  624315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33260 <nil> <nil>}
	I1202 16:17:56.074316  624315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:17:56.371749  624315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:17:56.371776  624315 machine.go:97] duration metric: took 1.186549675s to provisionDockerMachine
	I1202 16:17:56.371787  624315 client.go:176] duration metric: took 2.583251832s to LocalClient.Create
	I1202 16:17:56.371809  624315 start.go:167] duration metric: took 2.58333403s to libmachine.API.Create "newest-cni-682353"
	I1202 16:17:56.371819  624315 start.go:293] postStartSetup for "newest-cni-682353" (driver="docker")
	I1202 16:17:56.371833  624315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:17:56.371892  624315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:17:56.371933  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.393135  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:56.496995  624315 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:17:56.501038  624315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:17:56.501074  624315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:17:56.501087  624315 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:17:56.501151  624315 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:17:56.501258  624315 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:17:56.501378  624315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:17:56.509784  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:17:56.531088  624315 start.go:296] duration metric: took 159.252805ms for postStartSetup
	I1202 16:17:56.531578  624315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:17:56.551457  624315 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json ...
	I1202 16:17:56.551748  624315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:17:56.551795  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.569785  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:56.667711  624315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:17:56.672371  624315 start.go:128] duration metric: took 2.894159147s to createHost
	I1202 16:17:56.672398  624315 start.go:83] releasing machines lock for "newest-cni-682353", held for 2.894331813s
	I1202 16:17:56.672480  624315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:17:56.691026  624315 ssh_runner.go:195] Run: cat /version.json
	I1202 16:17:56.691068  624315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:17:56.691085  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.691142  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:17:56.709964  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:56.710241  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:17:56.863891  624315 ssh_runner.go:195] Run: systemctl --version
	I1202 16:17:56.870804  624315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:17:56.905199  624315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:17:56.910177  624315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:17:56.910251  624315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:17:56.939092  624315 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 16:17:56.939115  624315 start.go:496] detecting cgroup driver to use...
	I1202 16:17:56.939145  624315 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:17:56.939191  624315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:17:56.955610  624315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:17:56.969366  624315 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:17:56.969454  624315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:17:56.986554  624315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:17:57.004854  624315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:17:57.093269  624315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:17:57.182306  624315 docker.go:234] disabling docker service ...
	I1202 16:17:57.182377  624315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:17:57.203634  624315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:17:57.217007  624315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:17:57.302606  624315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:17:57.390234  624315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
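Note: the steps above stop and mask cri-docker and docker before CRI-O takes over. A minimal sketch of double-checking that by hand on the node (not taken from the log; assumes a systemd shell on the kicbase container):
    systemctl is-enabled cri-docker.service docker.service   # expect "masked" for both
    systemctl is-enabled cri-docker.socket docker.socket     # expect "disabled"
    systemctl is-active docker.service containerd.service    # expect "inactive"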
	I1202 16:17:57.403746  624315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:17:57.418993  624315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:17:57.419043  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.429606  624315 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:17:57.429677  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.440021  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.449681  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.459146  624315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:17:57.467927  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.477178  624315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.491151  624315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:17:57.501347  624315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:17:57.509339  624315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:17:57.517116  624315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:17:57.601755  624315 ssh_runner.go:195] Run: sudo systemctl restart crio
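Note: the sed edits above set the pause image, the systemd cgroup manager, the conmon cgroup, and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O. A hedged sketch of confirming the effective values after the restart (uses the same `crio config` command the log runs later; exact output formatting may differ):
    sudo crio config 2>/dev/null | grep -E '(pause_image|cgroup_manager|conmon_cgroup) ='
    # expect: pause_image = "registry.k8s.io/pause:3.10.1"
    #         cgroup_manager = "systemd"
    #         conmon_cgroup = "pod"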
	I1202 16:17:58.048791  624315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:17:58.048863  624315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:17:58.053242  624315 start.go:564] Will wait 60s for crictl version
	I1202 16:17:58.053300  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.057049  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:17:58.081825  624315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
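Note: the same runtime information can be pulled by hand through the endpoint written to /etc/crictl.yaml above; a minimal sketch (not from the log):
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    crio --version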
	I1202 16:17:58.081931  624315 ssh_runner.go:195] Run: crio --version
	I1202 16:17:58.110886  624315 ssh_runner.go:195] Run: crio --version
	I1202 16:17:58.141240  624315 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 16:17:58.142295  624315 cli_runner.go:164] Run: docker network inspect newest-cni-682353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:17:58.161244  624315 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 16:17:58.165842  624315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:17:58.178580  624315 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1202 16:17:53.734533  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:56.234121  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	W1202 16:17:58.734313  615191 pod_ready.go:104] pod "coredns-66bc5c9577-f2vhx" is not "Ready", error: <nil>
	I1202 16:18:00.733749  615191 pod_ready.go:94] pod "coredns-66bc5c9577-f2vhx" is "Ready"
	I1202 16:18:00.733779  615191 pod_ready.go:86] duration metric: took 37.005697217s for pod "coredns-66bc5c9577-f2vhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.736453  615191 pod_ready.go:83] waiting for pod "etcd-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.740534  615191 pod_ready.go:94] pod "etcd-embed-certs-046271" is "Ready"
	I1202 16:18:00.740565  615191 pod_ready.go:86] duration metric: took 4.089173ms for pod "etcd-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.742727  615191 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.746692  615191 pod_ready.go:94] pod "kube-apiserver-embed-certs-046271" is "Ready"
	I1202 16:18:00.746764  615191 pod_ready.go:86] duration metric: took 4.006581ms for pod "kube-apiserver-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.748680  615191 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:00.933918  615191 pod_ready.go:94] pod "kube-controller-manager-embed-certs-046271" is "Ready"
	I1202 16:18:00.933953  615191 pod_ready.go:86] duration metric: took 185.250717ms for pod "kube-controller-manager-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:01.132237  615191 pod_ready.go:83] waiting for pod "kube-proxy-q9pxb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:01.532378  615191 pod_ready.go:94] pod "kube-proxy-q9pxb" is "Ready"
	I1202 16:18:01.532413  615191 pod_ready.go:86] duration metric: took 400.148403ms for pod "kube-proxy-q9pxb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:01.732266  615191 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:02.132067  615191 pod_ready.go:94] pod "kube-scheduler-embed-certs-046271" is "Ready"
	I1202 16:18:02.132099  615191 pod_ready.go:86] duration metric: took 399.802212ms for pod "kube-scheduler-embed-certs-046271" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:02.132116  615191 pod_ready.go:40] duration metric: took 38.407682171s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:18:02.187882  615191 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 16:18:02.190701  615191 out.go:179] * Done! kubectl is now configured to use "embed-certs-046271" cluster and "default" namespace by default
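Note: at this point the embed-certs-046271 start (process 615191) is complete. A hedged sketch of reproducing the readiness check it just performed, from the host with the freshly written kubeconfig:
    kubectl config current-context   # expect embed-certs-046271
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s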
	W1202 16:17:58.192329  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:18:00.691537  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	I1202 16:17:58.179767  624315 kubeadm.go:884] updating cluster {Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:17:58.179901  624315 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:17:58.179958  624315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:17:58.206197  624315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 16:17:58.206227  624315 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 16:17:58.206292  624315 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:17:58.206301  624315 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.206310  624315 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.206335  624315 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.206332  624315 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.206358  624315 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.206378  624315 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.206345  624315 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 16:17:58.207688  624315 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 16:17:58.207691  624315 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.207691  624315 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.207693  624315 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.207691  624315 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.207689  624315 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:17:58.207798  624315 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.207691  624315 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.389649  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.398497  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.416062  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.417018  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.427952  624315 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1202 16:17:58.428006  624315 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.428060  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.428438  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1202 16:17:58.437511  624315 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1202 16:17:58.437565  624315 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.437615  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.460811  624315 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1202 16:17:58.460830  624315 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1202 16:17:58.460863  624315 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.460867  624315 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.460871  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.460903  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.460905  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.468480  624315 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1202 16:17:58.468534  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.468536  624315 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 16:17:58.468689  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.482843  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.490077  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.490149  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.490175  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.494233  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.502739  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.502866  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 16:17:58.534397  624315 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1202 16:17:58.534456  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 16:17:58.534461  624315 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.534509  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.534571  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.534725  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.543764  624315 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1202 16:17:58.543814  624315 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.543823  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 16:17:58.543866  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:17:58.546006  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 16:17:58.568541  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 16:17:58.568608  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.568652  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 16:17:58.568944  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 16:17:58.569405  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 16:17:58.598889  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1202 16:17:58.598908  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 16:17:58.598916  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 16:17:58.598908  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 16:17:58.599016  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1202 16:17:58.598911  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.598914  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.598995  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 16:17:58.598996  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 16:17:58.605101  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 16:17:58.605197  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 16:17:58.660277  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 16:17:58.660294  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1202 16:17:58.660307  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.660308  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1202 16:17:58.660293  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 16:17:58.660346  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1202 16:17:58.660376  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 16:17:58.660397  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 16:17:58.677906  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 16:17:58.677939  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1202 16:17:58.732749  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 16:17:58.732792  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 16:17:58.732793  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1202 16:17:58.732887  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 16:17:58.732894  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 16:17:58.789503  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 16:17:58.789541  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1202 16:17:58.813684  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 16:17:58.813789  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 16:17:58.820547  624315 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 16:17:58.820618  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1202 16:17:58.860692  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 16:17:58.860733  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1202 16:17:59.246230  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1202 16:17:59.246281  624315 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 16:17:59.246353  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 16:17:59.486290  624315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:00.367652  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.121268658s)
	I1202 16:18:00.367707  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 16:18:00.367746  624315 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 16:18:00.367761  624315 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1202 16:18:00.367803  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1202 16:18:00.367802  624315 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:00.367932  624315 ssh_runner.go:195] Run: which crictl
	I1202 16:18:01.667535  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.299704769s)
	I1202 16:18:01.667566  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 16:18:01.667582  624315 ssh_runner.go:235] Completed: which crictl: (1.299628705s)
	I1202 16:18:01.667642  624315 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 16:18:01.667689  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1202 16:18:01.667644  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1202 16:18:03.191574  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:18:05.191913  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	W1202 16:18:07.192587  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	I1202 16:18:02.923984  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.256270074s)
	I1202 16:18:02.924014  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 16:18:02.924039  624315 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 16:18:02.924074  624315 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.256318055s)
	I1202 16:18:02.924088  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 16:18:02.924122  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:04.317999  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.393888467s)
	I1202 16:18:04.318026  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 16:18:04.318041  624315 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.393890267s)
	I1202 16:18:04.318057  624315 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 16:18:04.318112  624315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:04.318114  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 16:18:04.344156  624315 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 16:18:04.344265  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 16:18:05.433824  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.115591929s)
	I1202 16:18:05.433855  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 16:18:05.433878  624315 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.089585497s)
	I1202 16:18:05.433893  624315 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 16:18:05.433913  624315 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 16:18:05.433937  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1202 16:18:05.433969  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 16:18:06.583432  624315 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.149420095s)
	I1202 16:18:06.583469  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 16:18:06.583499  624315 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 16:18:06.583549  624315 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1202 16:18:07.120218  624315 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 16:18:07.120270  624315 cache_images.go:125] Successfully loaded all cached images
	I1202 16:18:07.120277  624315 cache_images.go:94] duration metric: took 8.914034697s to LoadCachedImages
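Note: every image in the list went through the same three steps shown above: remove the stale tag from the runtime, copy the cached archive onto the node, then podman-load it. A hedged sketch of one iteration, using the pause-image names and paths taken from the log:
    sudo crictl rmi registry.k8s.io/pause:3.10.1
    # (archive is copied by minikube via scp to /var/lib/minikube/images/pause_3.10.1)
    sudo podman load -i /var/lib/minikube/images/pause_3.10.1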
	I1202 16:18:07.120288  624315 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1202 16:18:07.120393  624315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-682353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:18:07.120482  624315 ssh_runner.go:195] Run: crio config
	I1202 16:18:07.166513  624315 cni.go:84] Creating CNI manager for ""
	I1202 16:18:07.166534  624315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:18:07.166549  624315 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 16:18:07.166572  624315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-682353 NodeName:newest-cni-682353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:18:07.166713  624315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-682353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 16:18:07.166783  624315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 16:18:07.175146  624315 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 16:18:07.175204  624315 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 16:18:07.183178  624315 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1202 16:18:07.183195  624315 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1202 16:18:07.183241  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 16:18:07.183244  624315 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1202 16:18:07.183286  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 16:18:07.183302  624315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:07.188024  624315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 16:18:07.188056  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1202 16:18:07.201415  624315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 16:18:07.201452  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1202 16:18:07.201516  624315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 16:18:07.221248  624315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 16:18:07.221288  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
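Note: as the "Not caching binary" lines above show, the kubelet/kubeadm/kubectl binaries are resolved against dl.k8s.io with a companion .sha256 checksum file. A minimal sketch of the same checksum-verified download done by hand (kubelet shown; kubeadm and kubectl are analogous):
    V=v1.35.0-beta.0
    curl -fsSLO "https://dl.k8s.io/release/${V}/bin/linux/amd64/kubelet"
    curl -fsSL "https://dl.k8s.io/release/${V}/bin/linux/amd64/kubelet.sha256" -o kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check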
	I1202 16:18:07.716552  624315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:18:07.724534  624315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1202 16:18:07.737551  624315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 16:18:07.778722  624315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 16:18:07.792804  624315 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:18:07.796626  624315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:18:07.838782  624315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:18:07.929263  624315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:18:07.956172  624315 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353 for IP: 192.168.103.2
	I1202 16:18:07.956193  624315 certs.go:195] generating shared ca certs ...
	I1202 16:18:07.956208  624315 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:07.956374  624315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:18:07.956413  624315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:18:07.956436  624315 certs.go:257] generating profile certs ...
	I1202 16:18:07.956496  624315 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.key
	I1202 16:18:07.956510  624315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.crt with IP's: []
	I1202 16:18:08.055915  624315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.crt ...
	I1202 16:18:08.055950  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.crt: {Name:mkbae4e216b534e22a7a22b5211ba0f085fa0a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.056133  624315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.key ...
	I1202 16:18:08.056145  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.key: {Name:mk01dd2149dcd5f6287686ae6bf7579abf16ae6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.056231  624315 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0
	I1202 16:18:08.056247  624315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt.5833a0e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1202 16:18:08.454875  624315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt.5833a0e0 ...
	I1202 16:18:08.454909  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt.5833a0e0: {Name:mk34e2dbb313339f9326d6e80e3c7620a9f90d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.455091  624315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0 ...
	I1202 16:18:08.455107  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0: {Name:mk521f77ecbe6526d4308034abb99ca52329446f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.455185  624315 certs.go:382] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt.5833a0e0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt
	I1202 16:18:08.455260  624315 certs.go:386] copying /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key
	I1202 16:18:08.455314  624315 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key
	I1202 16:18:08.455328  624315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt with IP's: []
	I1202 16:18:08.725997  624315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt ...
	I1202 16:18:08.726029  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt: {Name:mk2542633dc1eea73aaea75c9b720c86ebeab857 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:08.726243  624315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key ...
	I1202 16:18:08.726261  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key: {Name:mkd92fd9f3993b30fa9a53ce61ae93d417dab751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
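Note: the apiserver serving cert generated above embeds the service IP, loopback, and node IP SANs listed in the log. A hedged sketch of inspecting them from the host (path taken from the profile directory above; the cert will also carry the usual kubernetes.* DNS names):
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expect IP Address:10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2 among the SANs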
	I1202 16:18:08.726487  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:18:08.726533  624315 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:18:08.726543  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:18:08.726568  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:18:08.726598  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:18:08.726621  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:18:08.726661  624315 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:18:08.727246  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:18:08.746871  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:18:08.766275  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:18:08.786826  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:18:08.805357  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 16:18:08.823135  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 16:18:08.840900  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:18:08.858454  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 16:18:08.876051  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:18:08.896345  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:18:08.914496  624315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:18:08.933005  624315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:18:08.946072  624315 ssh_runner.go:195] Run: openssl version
	I1202 16:18:08.952558  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:18:08.961641  624315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:18:08.965532  624315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:18:08.965592  624315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:18:09.000522  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:18:09.010236  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:18:09.019164  624315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:09.023057  624315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:09.023101  624315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:09.058108  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:18:09.067191  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:18:09.075994  624315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:18:09.079911  624315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:18:09.079961  624315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:18:09.114442  624315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
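	The three ca-certificates passes above follow the standard OpenSSL trust-store pattern: compute the subject-name hash of each PEM and symlink the file as <hash>.0 under /etc/ssl/certs (b5213941.0 for minikubeCA.pem in this run). A minimal sketch of the same step, reusing the file name from the log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # resolves to b5213941.0 here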
	I1202 16:18:09.123256  624315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:18:09.127070  624315 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 16:18:09.127130  624315 kubeadm.go:401] StartCluster: {Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:09.127204  624315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:18:09.127247  624315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:18:09.154342  624315 cri.go:89] found id: ""
	I1202 16:18:09.154431  624315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:18:09.162826  624315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 16:18:09.170565  624315 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 16:18:09.170625  624315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 16:18:09.178276  624315 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 16:18:09.178296  624315 kubeadm.go:158] found existing configuration files:
	
	I1202 16:18:09.178342  624315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 16:18:09.185869  624315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 16:18:09.185937  624315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 16:18:09.194222  624315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 16:18:09.202327  624315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 16:18:09.202390  624315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 16:18:09.210067  624315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 16:18:09.217901  624315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 16:18:09.217973  624315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 16:18:09.225309  624315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 16:18:09.233081  624315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 16:18:09.233139  624315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 16:18:09.240540  624315 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 16:18:09.277009  624315 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 16:18:09.277090  624315 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 16:18:09.343291  624315 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 16:18:09.343358  624315 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 16:18:09.343404  624315 kubeadm.go:319] OS: Linux
	I1202 16:18:09.343489  624315 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 16:18:09.343580  624315 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 16:18:09.343628  624315 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 16:18:09.343723  624315 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 16:18:09.343803  624315 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 16:18:09.343870  624315 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 16:18:09.343928  624315 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 16:18:09.343987  624315 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 16:18:09.413832  624315 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 16:18:09.414018  624315 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 16:18:09.414143  624315 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 16:18:09.428395  624315 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 16:18:09.431691  624315 out.go:252]   - Generating certificates and keys ...
	I1202 16:18:09.431798  624315 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 16:18:09.431884  624315 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 16:18:09.551712  624315 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 16:18:09.619865  624315 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 16:18:09.700125  624315 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 16:18:09.785826  624315 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 16:18:10.002211  624315 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 16:18:10.002452  624315 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-682353] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 16:18:10.062821  624315 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 16:18:10.062997  624315 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-682353] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 16:18:10.262133  624315 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 16:18:10.339928  624315 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 16:18:10.406587  624315 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 16:18:10.406681  624315 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 16:18:10.473785  624315 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 16:18:10.509892  624315 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 16:18:10.565788  624315 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 16:18:10.713405  624315 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 16:18:10.837791  624315 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 16:18:10.838222  624315 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 16:18:10.844246  624315 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1202 16:18:09.691754  617021 pod_ready.go:104] pod "coredns-66bc5c9577-6h6nr" is not "Ready", error: <nil>
	I1202 16:18:11.692782  617021 pod_ready.go:94] pod "coredns-66bc5c9577-6h6nr" is "Ready"
	I1202 16:18:11.692815  617021 pod_ready.go:86] duration metric: took 37.507156807s for pod "coredns-66bc5c9577-6h6nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.696097  617021 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.700661  617021 pod_ready.go:94] pod "etcd-default-k8s-diff-port-806420" is "Ready"
	I1202 16:18:11.700696  617021 pod_ready.go:86] duration metric: took 4.57279ms for pod "etcd-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.702761  617021 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.707235  617021 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-806420" is "Ready"
	I1202 16:18:11.707259  617021 pod_ready.go:86] duration metric: took 4.477641ms for pod "kube-apiserver-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.710403  617021 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:11.889880  617021 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-806420" is "Ready"
	I1202 16:18:11.889915  617021 pod_ready.go:86] duration metric: took 179.45256ms for pod "kube-controller-manager-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:12.090400  617021 pod_ready.go:83] waiting for pod "kube-proxy-574km" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:12.490369  617021 pod_ready.go:94] pod "kube-proxy-574km" is "Ready"
	I1202 16:18:12.490399  617021 pod_ready.go:86] duration metric: took 399.934021ms for pod "kube-proxy-574km" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:10.846051  624315 out.go:252]   - Booting up control plane ...
	I1202 16:18:10.846198  624315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 16:18:10.846293  624315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 16:18:10.847182  624315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 16:18:10.861199  624315 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 16:18:10.861316  624315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 16:18:10.868121  624315 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 16:18:10.868349  624315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 16:18:10.868404  624315 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 16:18:10.971946  624315 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 16:18:10.972060  624315 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 16:18:11.473760  624315 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.060862ms
	I1202 16:18:11.478484  624315 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 16:18:11.478573  624315 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1202 16:18:11.478669  624315 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 16:18:11.478738  624315 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 16:18:12.484398  624315 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005746987s
	I1202 16:18:12.691533  617021 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:13.090663  617021 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-806420" is "Ready"
	I1202 16:18:13.090696  617021 pod_ready.go:86] duration metric: took 399.134187ms for pod "kube-scheduler-default-k8s-diff-port-806420" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 16:18:13.090710  617021 pod_ready.go:40] duration metric: took 38.908912326s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 16:18:13.137409  617021 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 16:18:13.139493  617021 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-806420" cluster and "default" namespace by default
	I1202 16:18:13.031004  624315 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.552408562s
	I1202 16:18:14.979810  624315 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501228669s
	I1202 16:18:14.998836  624315 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 16:18:15.009002  624315 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 16:18:15.018325  624315 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 16:18:15.018557  624315 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-682353 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 16:18:15.026607  624315 kubeadm.go:319] [bootstrap-token] Using token: 8ssxbw.m6ls5tgd8f1crjpp
	I1202 16:18:15.027945  624315 out.go:252]   - Configuring RBAC rules ...
	I1202 16:18:15.028111  624315 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 16:18:15.032080  624315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 16:18:15.036812  624315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 16:18:15.039329  624315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 16:18:15.041644  624315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 16:18:15.044054  624315 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 16:18:15.386977  624315 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 16:18:15.803838  624315 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 16:18:16.386472  624315 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 16:18:16.387384  624315 kubeadm.go:319] 
	I1202 16:18:16.387505  624315 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 16:18:16.387525  624315 kubeadm.go:319] 
	I1202 16:18:16.387625  624315 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 16:18:16.387634  624315 kubeadm.go:319] 
	I1202 16:18:16.387663  624315 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 16:18:16.387746  624315 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 16:18:16.387813  624315 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 16:18:16.387821  624315 kubeadm.go:319] 
	I1202 16:18:16.387891  624315 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 16:18:16.387900  624315 kubeadm.go:319] 
	I1202 16:18:16.387976  624315 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 16:18:16.387993  624315 kubeadm.go:319] 
	I1202 16:18:16.388066  624315 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 16:18:16.388174  624315 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 16:18:16.388272  624315 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 16:18:16.388281  624315 kubeadm.go:319] 
	I1202 16:18:16.388408  624315 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 16:18:16.388542  624315 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 16:18:16.388552  624315 kubeadm.go:319] 
	I1202 16:18:16.388679  624315 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8ssxbw.m6ls5tgd8f1crjpp \
	I1202 16:18:16.388808  624315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a700026e2fe1634919809d9050f2aa4b3e0ccbee543d4881e1cd695d56e7eef6 \
	I1202 16:18:16.388847  624315 kubeadm.go:319] 	--control-plane 
	I1202 16:18:16.388856  624315 kubeadm.go:319] 
	I1202 16:18:16.388968  624315 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 16:18:16.388977  624315 kubeadm.go:319] 
	I1202 16:18:16.389085  624315 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8ssxbw.m6ls5tgd8f1crjpp \
	I1202 16:18:16.389209  624315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a700026e2fe1634919809d9050f2aa4b3e0ccbee543d4881e1cd695d56e7eef6 
	I1202 16:18:16.391629  624315 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 16:18:16.391734  624315 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 16:18:16.391766  624315 cni.go:84] Creating CNI manager for ""
	I1202 16:18:16.391776  624315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:18:16.393985  624315 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 16:18:16.395361  624315 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 16:18:16.400369  624315 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1202 16:18:16.400392  624315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 16:18:16.415783  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 16:18:16.682346  624315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 16:18:16.682492  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-682353 minikube.k8s.io/updated_at=2025_12_02T16_18_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689 minikube.k8s.io/name=newest-cni-682353 minikube.k8s.io/primary=true
	I1202 16:18:16.682679  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:18:16.695743  624315 ops.go:34] apiserver oom_adj: -16
	I1202 16:18:16.793011  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:18:17.293670  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:18:17.793991  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:18:18.293617  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:18:18.793636  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:18:19.294086  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:18:19.793674  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:18:20.293533  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:18:20.793164  624315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 16:18:20.863251  624315 kubeadm.go:1114] duration metric: took 4.180633438s to wait for elevateKubeSystemPrivileges
	I1202 16:18:20.863292  624315 kubeadm.go:403] duration metric: took 11.736166575s to StartCluster
	I1202 16:18:20.863319  624315 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:20.863396  624315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:18:20.865335  624315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:20.865607  624315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 16:18:20.865613  624315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:18:20.865713  624315 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 16:18:20.865800  624315 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-682353"
	I1202 16:18:20.865818  624315 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-682353"
	I1202 16:18:20.865834  624315 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:18:20.865854  624315 host.go:66] Checking if "newest-cni-682353" exists ...
	I1202 16:18:20.865892  624315 addons.go:70] Setting default-storageclass=true in profile "newest-cni-682353"
	I1202 16:18:20.865908  624315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-682353"
	I1202 16:18:20.866283  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:20.866602  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:20.868606  624315 out.go:179] * Verifying Kubernetes components...
	I1202 16:18:20.869964  624315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:18:20.892504  624315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:20.893211  624315 addons.go:239] Setting addon default-storageclass=true in "newest-cni-682353"
	I1202 16:18:20.893254  624315 host.go:66] Checking if "newest-cni-682353" exists ...
	I1202 16:18:20.893718  624315 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:20.893742  624315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:18:20.893765  624315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 16:18:20.893819  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:20.930047  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:20.931925  624315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:18:20.931984  624315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:18:20.932046  624315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:20.957791  624315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33260 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:20.988590  624315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 16:18:21.042814  624315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:18:21.060608  624315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:18:21.092394  624315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:18:21.191579  624315 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1202 16:18:21.192841  624315 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:18:21.192905  624315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:18:21.401763  624315 api_server.go:72] duration metric: took 536.116426ms to wait for apiserver process to appear ...
	I1202 16:18:21.401790  624315 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:18:21.401811  624315 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 16:18:21.408002  624315 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1202 16:18:21.408831  624315 api_server.go:141] control plane version: v1.35.0-beta.0
	I1202 16:18:21.408863  624315 api_server.go:131] duration metric: took 7.065985ms to wait for apiserver health ...
	I1202 16:18:21.408872  624315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:18:21.409095  624315 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1202 16:18:21.410510  624315 addons.go:530] duration metric: took 544.799839ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 16:18:21.411390  624315 system_pods.go:59] 8 kube-system pods found
	I1202 16:18:21.411414  624315 system_pods.go:61] "coredns-7d764666f9-jb9wz" [889f4af6-e976-4ec7-ae6e-ed5ec813fe4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 16:18:21.411448  624315 system_pods.go:61] "etcd-newest-cni-682353" [5ab9fd7e-9c55-45a2-ac07-46d797be98d1] Running
	I1202 16:18:21.411456  624315 system_pods.go:61] "kindnet-cxfrf" [164fac47-6c74-434b-b780-1ba1c2a40495] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:18:21.411464  624315 system_pods.go:61] "kube-apiserver-newest-cni-682353" [df312caa-500b-4c0b-bda0-f8acafcff8b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:18:21.411471  624315 system_pods.go:61] "kube-controller-manager-newest-cni-682353" [17765d5c-8f15-40da-886f-c807519c7e05] Running
	I1202 16:18:21.411476  624315 system_pods.go:61] "kube-proxy-srq78" [6d9b68b3-fb87-47f4-887a-3b1851999e6c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:18:21.411482  624315 system_pods.go:61] "kube-scheduler-newest-cni-682353" [6b53974a-1f7e-4d8a-bae6-24aa797c54d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:18:21.411492  624315 system_pods.go:61] "storage-provisioner" [c5d388c9-2f39-4c65-8e57-7846b28c1db8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 16:18:21.411499  624315 system_pods.go:74] duration metric: took 2.621563ms to wait for pod list to return data ...
	I1202 16:18:21.411506  624315 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:18:21.413552  624315 default_sa.go:45] found service account: "default"
	I1202 16:18:21.413568  624315 default_sa.go:55] duration metric: took 2.057319ms for default service account to be created ...
	I1202 16:18:21.413578  624315 kubeadm.go:587] duration metric: took 547.938589ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 16:18:21.413593  624315 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:18:21.415714  624315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:18:21.415736  624315 node_conditions.go:123] node cpu capacity is 8
	I1202 16:18:21.415751  624315 node_conditions.go:105] duration metric: took 2.154072ms to run NodePressure ...
	I1202 16:18:21.415762  624315 start.go:242] waiting for startup goroutines ...
	I1202 16:18:21.695083  624315 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-682353" context rescaled to 1 replicas
	I1202 16:18:21.695121  624315 start.go:247] waiting for cluster config update ...
	I1202 16:18:21.695133  624315 start.go:256] writing updated cluster config ...
	I1202 16:18:21.695398  624315 ssh_runner.go:195] Run: rm -f paused
	I1202 16:18:21.746536  624315 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 16:18:21.748227  624315 out.go:179] * Done! kubectl is now configured to use "newest-cni-682353" cluster and "default" namespace by default
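	The start sequence above ends with the host.minikube.internal record injected into the CoreDNS ConfigMap and the storage-provisioner and default-storageclass addons applied. A quick sanity check from the host, assuming the kubectl context is the freshly written newest-cni-682353 entry (profile name and binary path taken from the log):

    kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 hosts
    out/minikube-linux-amd64 -p newest-cni-682353 addons list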
	
	
	==> CRI-O <==
	Dec 02 16:18:11 newest-cni-682353 crio[767]: time="2025-12-02T16:18:11.75762141Z" level=info msg="Started container" PID=2173 containerID=93edb25eb5feddeec87c9dbf5e53324d938cc655b60a7d6102e805a6c76f428b description=kube-system/kube-controller-manager-newest-cni-682353/kube-controller-manager id=1553b58a-1368-4ded-85b1-26e22b74411f name=/runtime.v1.RuntimeService/StartContainer sandboxID=939788663e52014bd80b0c1bb5b8c6552a1005c325b55d64b0c7e9ad1dafe445
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.04486283Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-srq78/POD" id=c97cca99-7410-47b7-b971-334d994ee6f3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.044955235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.04806082Z" level=info msg="Running pod sandbox: kube-system/kindnet-cxfrf/POD" id=6d123325-19f2-4641-92c2-e89635843896 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.048401529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.051950905Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6d123325-19f2-4641-92c2-e89635843896 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.052171302Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c97cca99-7410-47b7-b971-334d994ee6f3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.05392152Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.054868624Z" level=info msg="Ran pod sandbox 9f44c01d95bd551aba129c8c478be8a00b7a4e9dc7a8da9f7e2c21b408ab54d8 with infra container: kube-system/kube-proxy-srq78/POD" id=c97cca99-7410-47b7-b971-334d994ee6f3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.056966813Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=58ac85d4-e723-49db-8603-6dd63c234a4c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.058566018Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.059050676Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7e73923d-72a3-467d-b602-4799392eea2c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.05953734Z" level=info msg="Ran pod sandbox 66bf53556b44ffccf8a0e69b389a7b1c15b5b401af2506c91d2713471f47b683 with infra container: kube-system/kindnet-cxfrf/POD" id=6d123325-19f2-4641-92c2-e89635843896 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.061784004Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7825c4e2-a0c2-4b7a-9000-1aec679dcd85 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.0619226Z" level=info msg="Image docker.io/kindest/kindnetd:v20250512-df8de77b not found" id=7825c4e2-a0c2-4b7a-9000-1aec679dcd85 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.061967982Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20250512-df8de77b found" id=7825c4e2-a0c2-4b7a-9000-1aec679dcd85 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.064840151Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5ff0be5d-4795-46e5-b66f-ba997e5bcfb6 name=/runtime.v1.ImageService/PullImage
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.065452825Z" level=info msg="Creating container: kube-system/kube-proxy-srq78/kube-proxy" id=db2881b2-4b55-4590-8541-7b28dd97baa8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.065612453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.071101364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.071146322Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250512-df8de77b\""
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.071760083Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.11566164Z" level=info msg="Created container e4268fba4024133575d9de518a577b07791ddd97e93d187014e276ed666b9e7f: kube-system/kube-proxy-srq78/kube-proxy" id=db2881b2-4b55-4590-8541-7b28dd97baa8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.116647895Z" level=info msg="Starting container: e4268fba4024133575d9de518a577b07791ddd97e93d187014e276ed666b9e7f" id=fa999211-2b82-4d9b-b0c8-dee5240fbfed name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:18:21 newest-cni-682353 crio[767]: time="2025-12-02T16:18:21.122935794Z" level=info msg="Started container" PID=2519 containerID=e4268fba4024133575d9de518a577b07791ddd97e93d187014e276ed666b9e7f description=kube-system/kube-proxy-srq78/kube-proxy id=fa999211-2b82-4d9b-b0c8-dee5240fbfed name=/runtime.v1.RuntimeService/StartContainer sandboxID=9f44c01d95bd551aba129c8c478be8a00b7a4e9dc7a8da9f7e2c21b408ab54d8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e4268fba40241       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   2 seconds ago       Running             kube-proxy                0                   9f44c01d95bd5       kube-proxy-srq78                            kube-system
	1780ab4453a95       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   11 seconds ago      Running             kube-apiserver            0                   4e441387ce278       kube-apiserver-newest-cni-682353            kube-system
	93edb25eb5fed       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   11 seconds ago      Running             kube-controller-manager   0                   939788663e520       kube-controller-manager-newest-cni-682353   kube-system
	d671ce9db79d6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   11 seconds ago      Running             etcd                      0                   aa90a7389fa42       etcd-newest-cni-682353                      kube-system
	3f30e1ec9eaba       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   11 seconds ago      Running             kube-scheduler            0                   c7da1bf0dbaf4       kube-scheduler-newest-cni-682353            kube-system
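	The table above is the CRI-O view of the node at this point in the run; it can be reproduced on the node itself with crictl (a sketch, assuming crictl is pointed at the default CRI-O socket, not part of the test output):

    sudo crictl ps -a      # all containers, including exited attempts
    sudo crictl pods       # the sandboxes whose IDs appear in the POD ID column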
	
	
	==> describe nodes <==
	Name:               newest-cni-682353
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-682353
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=newest-cni-682353
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_18_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:18:13 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-682353
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:18:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:18:15 +0000   Tue, 02 Dec 2025 16:18:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:18:15 +0000   Tue, 02 Dec 2025 16:18:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:18:15 +0000   Tue, 02 Dec 2025 16:18:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 02 Dec 2025 16:18:15 +0000   Tue, 02 Dec 2025 16:18:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-682353
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                33d4fe74-dbd2-4001-8121-c4f8c133d3ca
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-682353                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-cxfrf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-682353             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-682353    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-srq78                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-682353             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-682353 event: Registered Node newest-cni-682353 in Controller
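	At this timestamp the node is NotReady only because no CNI config has been written to /etc/cni/net.d/ yet; the resulting node.kubernetes.io/not-ready taint is also what leaves coredns and storage-provisioner Unschedulable in the pod list above. Once the kindnet-cxfrf pod finishes pulling its image and installs the CNI config, the taint clears. A short way to watch that transition (sketch):

    kubectl get nodes -w
    kubectl -n kube-system get pods -o wide -w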
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [d671ce9db79d60b5c580a8e5cf057ba73b4cc55c16f96b66106a36c42515d474] <==
	{"level":"info","ts":"2025-12-02T16:18:13.385242Z","caller":"traceutil/trace.go:172","msg":"trace[2094640483] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-6yvce4efoj4jmnbrfoaulvbofi; range_end:; response_count:0; response_revision:21; }","duration":"110.410666ms","start":"2025-12-02T16:18:13.274825Z","end":"2025-12-02T16:18:13.385236Z","steps":["trace[2094640483] 'agreement among raft nodes before linearized reading'  (duration: 110.372179ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:18:13.385236Z","caller":"traceutil/trace.go:172","msg":"trace[1136041896] transaction","detail":"{read_only:false; response_revision:18; number_of_response:1; }","duration":"186.648333ms","start":"2025-12-02T16:18:13.198578Z","end":"2025-12-02T16:18:13.385227Z","steps":["trace[1136041896] 'process raft request'  (duration: 186.216067ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:18:13.385284Z","caller":"traceutil/trace.go:172","msg":"trace[398692588] transaction","detail":"{read_only:false; response_revision:19; number_of_response:1; }","duration":"186.401488ms","start":"2025-12-02T16:18:13.198871Z","end":"2025-12-02T16:18:13.385273Z","steps":["trace[398692588] 'process raft request'  (duration: 185.961404ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:18:13.385181Z","caller":"traceutil/trace.go:172","msg":"trace[1517538888] transaction","detail":"{read_only:false; response_revision:16; number_of_response:1; }","duration":"187.58776ms","start":"2025-12-02T16:18:13.197589Z","end":"2025-12-02T16:18:13.385176Z","steps":["trace[1517538888] 'process raft request'  (duration: 187.127861ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:18:13.385379Z","caller":"traceutil/trace.go:172","msg":"trace[1622299864] transaction","detail":"{read_only:false; response_revision:21; number_of_response:1; }","duration":"140.349779ms","start":"2025-12-02T16:18:13.245021Z","end":"2025-12-02T16:18:13.385370Z","steps":["trace[1622299864] 'process raft request'  (duration: 139.87897ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:18:13.564917Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.685326ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T16:18:13.564991Z","caller":"traceutil/trace.go:172","msg":"trace[1659000080] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:0; response_revision:22; }","duration":"119.771616ms","start":"2025-12-02T16:18:13.445199Z","end":"2025-12-02T16:18:13.564970Z","steps":["trace[1659000080] 'agreement among raft nodes before linearized reading'  (duration: 93.961075ms)","trace[1659000080] 'range keys from in-memory index tree'  (duration: 25.705123ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:18:13.565008Z","caller":"traceutil/trace.go:172","msg":"trace[2004124336] transaction","detail":"{read_only:false; response_revision:23; number_of_response:1; }","duration":"172.699728ms","start":"2025-12-02T16:18:13.392290Z","end":"2025-12-02T16:18:13.564989Z","steps":["trace[2004124336] 'process raft request'  (duration: 146.913649ms)","trace[2004124336] 'compare'  (duration: 25.62884ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:18:13.565045Z","caller":"traceutil/trace.go:172","msg":"trace[1780208944] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"172.631074ms","start":"2025-12-02T16:18:13.392404Z","end":"2025-12-02T16:18:13.565035Z","steps":["trace[1780208944] 'process raft request'  (duration: 172.53671ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:18:13.565057Z","caller":"traceutil/trace.go:172","msg":"trace[268060178] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"172.557386ms","start":"2025-12-02T16:18:13.392485Z","end":"2025-12-02T16:18:13.565042Z","steps":["trace[268060178] 'process raft request'  (duration: 172.490937ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:18:13.564915Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.549259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T16:18:13.565119Z","caller":"traceutil/trace.go:172","msg":"trace[1787556164] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:22; }","duration":"132.789136ms","start":"2025-12-02T16:18:13.432322Z","end":"2025-12-02T16:18:13.565111Z","steps":["trace[1787556164] 'agreement among raft nodes before linearized reading'  (duration: 106.823348ms)","trace[1787556164] 'range keys from in-memory index tree'  (duration: 25.690006ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:18:13.565128Z","caller":"traceutil/trace.go:172","msg":"trace[2006087676] transaction","detail":"{read_only:false; response_revision:26; number_of_response:1; }","duration":"172.593945ms","start":"2025-12-02T16:18:13.392524Z","end":"2025-12-02T16:18:13.565118Z","steps":["trace[2006087676] 'process raft request'  (duration: 172.473545ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:18:13.565202Z","caller":"traceutil/trace.go:172","msg":"trace[1372615316] transaction","detail":"{read_only:false; response_revision:28; number_of_response:1; }","duration":"172.594318ms","start":"2025-12-02T16:18:13.392599Z","end":"2025-12-02T16:18:13.565194Z","steps":["trace[1372615316] 'process raft request'  (duration: 172.433955ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:18:13.565120Z","caller":"traceutil/trace.go:172","msg":"trace[490038121] transaction","detail":"{read_only:false; response_revision:27; number_of_response:1; }","duration":"172.567085ms","start":"2025-12-02T16:18:13.392541Z","end":"2025-12-02T16:18:13.565108Z","steps":["trace[490038121] 'process raft request'  (duration: 172.475785ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:18:13.565288Z","caller":"traceutil/trace.go:172","msg":"trace[1878605358] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"167.028677ms","start":"2025-12-02T16:18:13.398251Z","end":"2025-12-02T16:18:13.565279Z","steps":["trace[1878605358] 'process raft request'  (duration: 166.839089ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:18:13.564915Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.768822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T16:18:13.565395Z","caller":"traceutil/trace.go:172","msg":"trace[2115923528] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:0; response_revision:22; }","duration":"173.268186ms","start":"2025-12-02T16:18:13.392111Z","end":"2025-12-02T16:18:13.565380Z","steps":["trace[2115923528] 'agreement among raft nodes before linearized reading'  (duration: 147.022365ms)","trace[2115923528] 'range keys from in-memory index tree'  (duration: 25.713857ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:18:13.565478Z","caller":"traceutil/trace.go:172","msg":"trace[2015683208] transaction","detail":"{read_only:false; response_revision:29; number_of_response:1; }","duration":"172.397511ms","start":"2025-12-02T16:18:13.393071Z","end":"2025-12-02T16:18:13.565468Z","steps":["trace[2015683208] 'process raft request'  (duration: 171.9828ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T16:18:13.564921Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.372154ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T16:18:13.566091Z","caller":"traceutil/trace.go:172","msg":"trace[976970519] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:22; }","duration":"173.541126ms","start":"2025-12-02T16:18:13.392538Z","end":"2025-12-02T16:18:13.566079Z","steps":["trace[976970519] 'agreement among raft nodes before linearized reading'  (duration: 146.628751ms)","trace[976970519] 'range keys from in-memory index tree'  (duration: 25.724408ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:18:13.583639Z","caller":"traceutil/trace.go:172","msg":"trace[1336375577] transaction","detail":"{read_only:false; response_revision:31; number_of_response:1; }","duration":"127.056416ms","start":"2025-12-02T16:18:13.456566Z","end":"2025-12-02T16:18:13.583622Z","steps":["trace[1336375577] 'process raft request'  (duration: 126.942812ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:18:13.774014Z","caller":"traceutil/trace.go:172","msg":"trace[353678695] transaction","detail":"{read_only:false; response_revision:42; number_of_response:1; }","duration":"124.010158ms","start":"2025-12-02T16:18:13.649984Z","end":"2025-12-02T16:18:13.773995Z","steps":["trace[353678695] 'process raft request'  (duration: 123.983163ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:18:13.774056Z","caller":"traceutil/trace.go:172","msg":"trace[517900657] transaction","detail":"{read_only:false; response_revision:41; number_of_response:1; }","duration":"124.960414ms","start":"2025-12-02T16:18:13.649071Z","end":"2025-12-02T16:18:13.774031Z","steps":["trace[517900657] 'process raft request'  (duration: 124.868438ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:18:13.774130Z","caller":"traceutil/trace.go:172","msg":"trace[1873185266] transaction","detail":"{read_only:false; response_revision:40; number_of_response:1; }","duration":"126.140075ms","start":"2025-12-02T16:18:13.647969Z","end":"2025-12-02T16:18:13.774109Z","steps":["trace[1873185266] 'process raft request'  (duration: 63.229243ms)","trace[1873185266] 'compare'  (duration: 62.624774ms)"],"step_count":2}
	
	
	==> kernel <==
	 16:18:23 up  3:00,  0 user,  load average: 3.79, 4.04, 2.73
	Linux newest-cni-682353 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [1780ab4453a95d88076dd1f111d4400af7a9957b81699ca060ce73b582d85190] <==
	I1202 16:18:13.115948       1 policy_source.go:248] refreshing policies
	E1202 16:18:13.131517       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1202 16:18:13.179108       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:18:13.195140       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:18:13.195354       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	E1202 16:18:13.195417       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1202 16:18:13.386729       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:18:13.388220       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:18:13.985334       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1202 16:18:13.990832       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1202 16:18:13.990853       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 16:18:14.533391       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:18:14.570655       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:18:14.686838       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 16:18:14.693539       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1202 16:18:14.694698       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 16:18:14.701534       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:18:15.009611       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:18:15.792239       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:18:15.802856       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 16:18:15.810103       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 16:18:20.613898       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 16:18:20.713835       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1202 16:18:20.977031       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:18:20.988849       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [93edb25eb5feddeec87c9dbf5e53324d938cc655b60a7d6102e805a6c76f428b] <==
	I1202 16:18:19.818986       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1202 16:18:19.818996       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:18:19.819003       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819020       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819077       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819293       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819300       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819475       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819502       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819525       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819535       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819552       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819592       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819639       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819681       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.819937       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.820569       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.820612       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.827371       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:18:19.828479       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.840981       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-682353" podCIDRs=["10.42.0.0/24"]
	I1202 16:18:19.917435       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:19.917459       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 16:18:19.917465       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 16:18:19.928271       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [e4268fba4024133575d9de518a577b07791ddd97e93d187014e276ed666b9e7f] <==
	I1202 16:18:21.174839       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:18:21.237236       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:18:21.338020       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:21.338067       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1202 16:18:21.338173       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:18:21.361176       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:18:21.361245       1 server_linux.go:136] "Using iptables Proxier"
	I1202 16:18:21.368100       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:18:21.368548       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 16:18:21.368571       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:18:21.370022       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:18:21.370127       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:18:21.370114       1 config.go:200] "Starting service config controller"
	I1202 16:18:21.370163       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:18:21.370186       1 config.go:309] "Starting node config controller"
	I1202 16:18:21.370197       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:18:21.370187       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:18:21.370222       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:18:21.470352       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 16:18:21.470366       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:18:21.470394       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 16:18:21.470456       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3f30e1ec9eababf020d505720d782a2d8c812d1baa9878d584c1fc96b94d1797] <==
	E1202 16:18:13.883551       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 16:18:13.884943       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1202 16:18:13.905515       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1202 16:18:13.906880       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1202 16:18:13.973036       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 16:18:13.974152       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1202 16:18:13.975367       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1202 16:18:13.976853       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1202 16:18:14.057930       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1202 16:18:14.059461       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1202 16:18:14.092845       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 16:18:14.094382       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1202 16:18:14.141903       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1202 16:18:14.143057       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1202 16:18:14.172280       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 16:18:14.173229       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1202 16:18:14.195487       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1202 16:18:14.196568       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1202 16:18:14.203842       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1202 16:18:14.204910       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1202 16:18:14.310685       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1202 16:18:14.311644       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1202 16:18:14.354321       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1202 16:18:14.355456       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	I1202 16:18:17.025577       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 16:18:16 newest-cni-682353 kubelet[2254]: E1202 16:18:16.667572    2254 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-682353" containerName="kube-controller-manager"
	Dec 02 16:18:16 newest-cni-682353 kubelet[2254]: I1202 16:18:16.694010    2254 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-682353" podStartSLOduration=1.693970999 podStartE2EDuration="1.693970999s" podCreationTimestamp="2025-12-02 16:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:18:16.679042368 +0000 UTC m=+1.140086269" watchObservedRunningTime="2025-12-02 16:18:16.693970999 +0000 UTC m=+1.155014899"
	Dec 02 16:18:16 newest-cni-682353 kubelet[2254]: I1202 16:18:16.706044    2254 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-682353" podStartSLOduration=1.7060209739999999 podStartE2EDuration="1.706020974s" podCreationTimestamp="2025-12-02 16:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:18:16.694328142 +0000 UTC m=+1.155372037" watchObservedRunningTime="2025-12-02 16:18:16.706020974 +0000 UTC m=+1.167064871"
	Dec 02 16:18:16 newest-cni-682353 kubelet[2254]: I1202 16:18:16.722230    2254 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-682353" podStartSLOduration=1.722207085 podStartE2EDuration="1.722207085s" podCreationTimestamp="2025-12-02 16:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:18:16.707926355 +0000 UTC m=+1.168970254" watchObservedRunningTime="2025-12-02 16:18:16.722207085 +0000 UTC m=+1.183250986"
	Dec 02 16:18:16 newest-cni-682353 kubelet[2254]: I1202 16:18:16.722626    2254 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-682353" podStartSLOduration=1.7226149849999999 podStartE2EDuration="1.722614985s" podCreationTimestamp="2025-12-02 16:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:18:16.722546055 +0000 UTC m=+1.183589955" watchObservedRunningTime="2025-12-02 16:18:16.722614985 +0000 UTC m=+1.183658885"
	Dec 02 16:18:17 newest-cni-682353 kubelet[2254]: E1202 16:18:17.653379    2254 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-682353" containerName="etcd"
	Dec 02 16:18:17 newest-cni-682353 kubelet[2254]: E1202 16:18:17.653560    2254 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-682353" containerName="kube-apiserver"
	Dec 02 16:18:17 newest-cni-682353 kubelet[2254]: E1202 16:18:17.653641    2254 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-682353" containerName="kube-scheduler"
	Dec 02 16:18:17 newest-cni-682353 kubelet[2254]: E1202 16:18:17.653825    2254 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-682353" containerName="kube-controller-manager"
	Dec 02 16:18:18 newest-cni-682353 kubelet[2254]: E1202 16:18:18.655768    2254 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-682353" containerName="kube-apiserver"
	Dec 02 16:18:18 newest-cni-682353 kubelet[2254]: E1202 16:18:18.655909    2254 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-682353" containerName="kube-scheduler"
	Dec 02 16:18:18 newest-cni-682353 kubelet[2254]: E1202 16:18:18.656154    2254 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-682353" containerName="etcd"
	Dec 02 16:18:19 newest-cni-682353 kubelet[2254]: E1202 16:18:19.657671    2254 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-682353" containerName="kube-scheduler"
	Dec 02 16:18:19 newest-cni-682353 kubelet[2254]: I1202 16:18:19.851999    2254 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 02 16:18:19 newest-cni-682353 kubelet[2254]: I1202 16:18:19.852876    2254 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 02 16:18:20 newest-cni-682353 kubelet[2254]: E1202 16:18:20.315145    2254 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-682353" containerName="kube-apiserver"
	Dec 02 16:18:20 newest-cni-682353 kubelet[2254]: I1202 16:18:20.750616    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlz7m\" (UniqueName: \"kubernetes.io/projected/6d9b68b3-fb87-47f4-887a-3b1851999e6c-kube-api-access-mlz7m\") pod \"kube-proxy-srq78\" (UID: \"6d9b68b3-fb87-47f4-887a-3b1851999e6c\") " pod="kube-system/kube-proxy-srq78"
	Dec 02 16:18:20 newest-cni-682353 kubelet[2254]: I1202 16:18:20.750730    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/164fac47-6c74-434b-b780-1ba1c2a40495-xtables-lock\") pod \"kindnet-cxfrf\" (UID: \"164fac47-6c74-434b-b780-1ba1c2a40495\") " pod="kube-system/kindnet-cxfrf"
	Dec 02 16:18:20 newest-cni-682353 kubelet[2254]: I1202 16:18:20.750780    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/164fac47-6c74-434b-b780-1ba1c2a40495-cni-cfg\") pod \"kindnet-cxfrf\" (UID: \"164fac47-6c74-434b-b780-1ba1c2a40495\") " pod="kube-system/kindnet-cxfrf"
	Dec 02 16:18:20 newest-cni-682353 kubelet[2254]: I1202 16:18:20.750962    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/164fac47-6c74-434b-b780-1ba1c2a40495-lib-modules\") pod \"kindnet-cxfrf\" (UID: \"164fac47-6c74-434b-b780-1ba1c2a40495\") " pod="kube-system/kindnet-cxfrf"
	Dec 02 16:18:20 newest-cni-682353 kubelet[2254]: I1202 16:18:20.751164    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d9b68b3-fb87-47f4-887a-3b1851999e6c-kube-proxy\") pod \"kube-proxy-srq78\" (UID: \"6d9b68b3-fb87-47f4-887a-3b1851999e6c\") " pod="kube-system/kube-proxy-srq78"
	Dec 02 16:18:20 newest-cni-682353 kubelet[2254]: I1202 16:18:20.751232    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d9b68b3-fb87-47f4-887a-3b1851999e6c-xtables-lock\") pod \"kube-proxy-srq78\" (UID: \"6d9b68b3-fb87-47f4-887a-3b1851999e6c\") " pod="kube-system/kube-proxy-srq78"
	Dec 02 16:18:20 newest-cni-682353 kubelet[2254]: I1202 16:18:20.751261    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d9b68b3-fb87-47f4-887a-3b1851999e6c-lib-modules\") pod \"kube-proxy-srq78\" (UID: \"6d9b68b3-fb87-47f4-887a-3b1851999e6c\") " pod="kube-system/kube-proxy-srq78"
	Dec 02 16:18:20 newest-cni-682353 kubelet[2254]: I1202 16:18:20.751288    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knkpf\" (UniqueName: \"kubernetes.io/projected/164fac47-6c74-434b-b780-1ba1c2a40495-kube-api-access-knkpf\") pod \"kindnet-cxfrf\" (UID: \"164fac47-6c74-434b-b780-1ba1c2a40495\") " pod="kube-system/kindnet-cxfrf"
	Dec 02 16:18:21 newest-cni-682353 kubelet[2254]: I1202 16:18:21.674738    2254 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-srq78" podStartSLOduration=1.674718755 podStartE2EDuration="1.674718755s" podCreationTimestamp="2025-12-02 16:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:18:21.674615488 +0000 UTC m=+6.135659388" watchObservedRunningTime="2025-12-02 16:18:21.674718755 +0000 UTC m=+6.135762656"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-682353 -n newest-cni-682353
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-682353 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-jb9wz storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-682353 describe pod coredns-7d764666f9-jb9wz storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-682353 describe pod coredns-7d764666f9-jb9wz storage-provisioner: exit status 1 (59.898993ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jb9wz" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-682353 describe pod coredns-7d764666f9-jb9wz storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.38s)
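helpers_test.go note: the describe step above exits with NotFound because the two pods picked up by the field selector (coredns-7d764666f9-jb9wz, storage-provisioner) were already gone by the time describe ran. A minimal sketch, assuming the same context name from the log and not how the harness itself works, of re-listing and describing the non-Running pods in a single pass so a pod deleted in between is simply skipped:
	# Hypothetical one-pass re-check; context name newest-cni-682353 is taken from the log above.
	kubectl --context newest-cni-682353 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
	  | while read -r ns name; do
	      # describe may still race with deletion; tolerate a missing pod instead of failing the loop
	      kubectl --context newest-cni-682353 -n "$ns" describe pod "$name" || true
	    done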

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-806420 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-806420 --alsologtostderr -v=1: exit status 80 (2.406463905s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-806420 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 16:18:24.918813  631876 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:18:24.918930  631876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:24.918940  631876 out.go:374] Setting ErrFile to fd 2...
	I1202 16:18:24.918944  631876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:24.919167  631876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:18:24.919402  631876 out.go:368] Setting JSON to false
	I1202 16:18:24.919436  631876 mustload.go:66] Loading cluster: default-k8s-diff-port-806420
	I1202 16:18:24.919809  631876 config.go:182] Loaded profile config "default-k8s-diff-port-806420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:18:24.920194  631876 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-806420 --format={{.State.Status}}
	I1202 16:18:24.938080  631876 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:18:24.938370  631876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:18:24.996373  631876 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-02 16:18:24.986035542 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:18:24.997072  631876 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-806420 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 16:18:24.998930  631876 out.go:179] * Pausing node default-k8s-diff-port-806420 ... 
	I1202 16:18:25.000258  631876 host.go:66] Checking if "default-k8s-diff-port-806420" exists ...
	I1202 16:18:25.000545  631876 ssh_runner.go:195] Run: systemctl --version
	I1202 16:18:25.000592  631876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-806420
	I1202 16:18:25.019061  631876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/default-k8s-diff-port-806420/id_rsa Username:docker}
	I1202 16:18:25.117397  631876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:25.141121  631876 pause.go:52] kubelet running: true
	I1202 16:18:25.141192  631876 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:18:25.308303  631876 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:18:25.308385  631876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:18:25.374042  631876 cri.go:89] found id: "01ff7a0935c00a17773bcff619702fa65a201219319ebec0982ffe9c505a8069"
	I1202 16:18:25.374066  631876 cri.go:89] found id: "cc0af4f512184e70ee469ee170f2fb1000845f2a1d89dd0df1b818272e15e846"
	I1202 16:18:25.374070  631876 cri.go:89] found id: "91fdfa7aaf4dd7592031b0da30d9eef5afa4036cab29173c60bf23797dbfd1e5"
	I1202 16:18:25.374073  631876 cri.go:89] found id: "97371be307c6073ea64886ffd7ed0b82e3b043b73ecbb945f6562db301c6048c"
	I1202 16:18:25.374076  631876 cri.go:89] found id: "70eac3d962ea7a373f8ad31de48465816e6e12bf1ea039d59bc2f11a8500f8d1"
	I1202 16:18:25.374081  631876 cri.go:89] found id: "dd7adc25ca0d8fd13c03d582eb1846e44e7ca31363dd13737dfcd8541ae71f4a"
	I1202 16:18:25.374084  631876 cri.go:89] found id: "85a4f9f063a689e0c01b71338ce33ac27c1c4ef5a601031762f5f6f8468c7949"
	I1202 16:18:25.374087  631876 cri.go:89] found id: "fa204ce25b4b750a274bec528d833933338cbebe536dd59bd13e8ef6cec0cb00"
	I1202 16:18:25.374090  631876 cri.go:89] found id: "e986fe28a3e21e60cd56299b5d31eb8159c847908a86b5e9049cff20903959aa"
	I1202 16:18:25.374106  631876 cri.go:89] found id: "81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2"
	I1202 16:18:25.374110  631876 cri.go:89] found id: "f87dd838bd5e6c4c94fe2d031797c0e5265616b837be2d71816a40b69471ead9"
	I1202 16:18:25.374112  631876 cri.go:89] found id: ""
	I1202 16:18:25.374149  631876 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:18:25.385932  631876 retry.go:31] will retry after 167.633446ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:25Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:18:25.554378  631876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:25.568008  631876 pause.go:52] kubelet running: false
	I1202 16:18:25.568061  631876 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:18:25.738650  631876 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:18:25.738727  631876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:18:25.810616  631876 cri.go:89] found id: "01ff7a0935c00a17773bcff619702fa65a201219319ebec0982ffe9c505a8069"
	I1202 16:18:25.810655  631876 cri.go:89] found id: "cc0af4f512184e70ee469ee170f2fb1000845f2a1d89dd0df1b818272e15e846"
	I1202 16:18:25.810659  631876 cri.go:89] found id: "91fdfa7aaf4dd7592031b0da30d9eef5afa4036cab29173c60bf23797dbfd1e5"
	I1202 16:18:25.810663  631876 cri.go:89] found id: "97371be307c6073ea64886ffd7ed0b82e3b043b73ecbb945f6562db301c6048c"
	I1202 16:18:25.810666  631876 cri.go:89] found id: "70eac3d962ea7a373f8ad31de48465816e6e12bf1ea039d59bc2f11a8500f8d1"
	I1202 16:18:25.810674  631876 cri.go:89] found id: "dd7adc25ca0d8fd13c03d582eb1846e44e7ca31363dd13737dfcd8541ae71f4a"
	I1202 16:18:25.810677  631876 cri.go:89] found id: "85a4f9f063a689e0c01b71338ce33ac27c1c4ef5a601031762f5f6f8468c7949"
	I1202 16:18:25.810687  631876 cri.go:89] found id: "fa204ce25b4b750a274bec528d833933338cbebe536dd59bd13e8ef6cec0cb00"
	I1202 16:18:25.810690  631876 cri.go:89] found id: "e986fe28a3e21e60cd56299b5d31eb8159c847908a86b5e9049cff20903959aa"
	I1202 16:18:25.810704  631876 cri.go:89] found id: "81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2"
	I1202 16:18:25.810717  631876 cri.go:89] found id: "f87dd838bd5e6c4c94fe2d031797c0e5265616b837be2d71816a40b69471ead9"
	I1202 16:18:25.810720  631876 cri.go:89] found id: ""
	I1202 16:18:25.810774  631876 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:18:25.822748  631876 retry.go:31] will retry after 260.046964ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:25Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:18:26.083200  631876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:26.097386  631876 pause.go:52] kubelet running: false
	I1202 16:18:26.097486  631876 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:18:26.248891  631876 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:18:26.248991  631876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:18:26.322918  631876 cri.go:89] found id: "01ff7a0935c00a17773bcff619702fa65a201219319ebec0982ffe9c505a8069"
	I1202 16:18:26.322955  631876 cri.go:89] found id: "cc0af4f512184e70ee469ee170f2fb1000845f2a1d89dd0df1b818272e15e846"
	I1202 16:18:26.322963  631876 cri.go:89] found id: "91fdfa7aaf4dd7592031b0da30d9eef5afa4036cab29173c60bf23797dbfd1e5"
	I1202 16:18:26.322969  631876 cri.go:89] found id: "97371be307c6073ea64886ffd7ed0b82e3b043b73ecbb945f6562db301c6048c"
	I1202 16:18:26.322974  631876 cri.go:89] found id: "70eac3d962ea7a373f8ad31de48465816e6e12bf1ea039d59bc2f11a8500f8d1"
	I1202 16:18:26.322979  631876 cri.go:89] found id: "dd7adc25ca0d8fd13c03d582eb1846e44e7ca31363dd13737dfcd8541ae71f4a"
	I1202 16:18:26.322984  631876 cri.go:89] found id: "85a4f9f063a689e0c01b71338ce33ac27c1c4ef5a601031762f5f6f8468c7949"
	I1202 16:18:26.322990  631876 cri.go:89] found id: "fa204ce25b4b750a274bec528d833933338cbebe536dd59bd13e8ef6cec0cb00"
	I1202 16:18:26.322995  631876 cri.go:89] found id: "e986fe28a3e21e60cd56299b5d31eb8159c847908a86b5e9049cff20903959aa"
	I1202 16:18:26.323020  631876 cri.go:89] found id: "81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2"
	I1202 16:18:26.323028  631876 cri.go:89] found id: "f87dd838bd5e6c4c94fe2d031797c0e5265616b837be2d71816a40b69471ead9"
	I1202 16:18:26.323033  631876 cri.go:89] found id: ""
	I1202 16:18:26.323073  631876 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:18:26.336390  631876 retry.go:31] will retry after 630.563356ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:26Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:18:26.967180  631876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:26.980880  631876 pause.go:52] kubelet running: false
	I1202 16:18:26.980946  631876 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:18:27.156661  631876 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:18:27.156755  631876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:18:27.235459  631876 cri.go:89] found id: "01ff7a0935c00a17773bcff619702fa65a201219319ebec0982ffe9c505a8069"
	I1202 16:18:27.235515  631876 cri.go:89] found id: "cc0af4f512184e70ee469ee170f2fb1000845f2a1d89dd0df1b818272e15e846"
	I1202 16:18:27.235522  631876 cri.go:89] found id: "91fdfa7aaf4dd7592031b0da30d9eef5afa4036cab29173c60bf23797dbfd1e5"
	I1202 16:18:27.235528  631876 cri.go:89] found id: "97371be307c6073ea64886ffd7ed0b82e3b043b73ecbb945f6562db301c6048c"
	I1202 16:18:27.235533  631876 cri.go:89] found id: "70eac3d962ea7a373f8ad31de48465816e6e12bf1ea039d59bc2f11a8500f8d1"
	I1202 16:18:27.235539  631876 cri.go:89] found id: "dd7adc25ca0d8fd13c03d582eb1846e44e7ca31363dd13737dfcd8541ae71f4a"
	I1202 16:18:27.235543  631876 cri.go:89] found id: "85a4f9f063a689e0c01b71338ce33ac27c1c4ef5a601031762f5f6f8468c7949"
	I1202 16:18:27.235548  631876 cri.go:89] found id: "fa204ce25b4b750a274bec528d833933338cbebe536dd59bd13e8ef6cec0cb00"
	I1202 16:18:27.235552  631876 cri.go:89] found id: "e986fe28a3e21e60cd56299b5d31eb8159c847908a86b5e9049cff20903959aa"
	I1202 16:18:27.235561  631876 cri.go:89] found id: "81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2"
	I1202 16:18:27.235566  631876 cri.go:89] found id: "f87dd838bd5e6c4c94fe2d031797c0e5265616b837be2d71816a40b69471ead9"
	I1202 16:18:27.235571  631876 cri.go:89] found id: ""
	I1202 16:18:27.235636  631876 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:18:27.254525  631876 out.go:203] 
	W1202 16:18:27.255987  631876 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 16:18:27.256016  631876 out.go:285] * 
	* 
	W1202 16:18:27.261308  631876 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 16:18:27.262815  631876 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-806420 --alsologtostderr -v=1 failed: exit status 80
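The pause path above enumerates running containers with `sudo runc list -f json`, and every attempt fails with "open /run/runc: no such file or directory" on this crio node, so the command exits with GUEST_PAUSE. A minimal sketch, assuming the profile name taken from the log and that the runtime state may live under a different directory (the candidate paths below are assumptions, not values confirmed by the report), of checking the node directly over minikube ssh:
	# Check which runtime state directories actually exist on the node (candidate paths are assumptions).
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-806420 -- sudo ls -ld /run/runc /run/crun /run/containers
	# List container IDs through the CRI instead of calling runc directly.
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-806420 -- sudo crictl ps -a -q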
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-806420
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-806420:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b",
	        "Created": "2025-12-02T16:16:19.182047028Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 617218,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:17:22.76605353Z",
	            "FinishedAt": "2025-12-02T16:17:21.666597736Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b/hostname",
	        "HostsPath": "/var/lib/docker/containers/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b/hosts",
	        "LogPath": "/var/lib/docker/containers/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b-json.log",
	        "Name": "/default-k8s-diff-port-806420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-806420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-806420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b",
	                "LowerDir": "/var/lib/docker/overlay2/798271a5d7d9be2b9f56abfd6664a7da5e191a27e4202bc76317776877650382-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/798271a5d7d9be2b9f56abfd6664a7da5e191a27e4202bc76317776877650382/merged",
	                "UpperDir": "/var/lib/docker/overlay2/798271a5d7d9be2b9f56abfd6664a7da5e191a27e4202bc76317776877650382/diff",
	                "WorkDir": "/var/lib/docker/overlay2/798271a5d7d9be2b9f56abfd6664a7da5e191a27e4202bc76317776877650382/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-806420",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-806420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-806420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-806420",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-806420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9b8cb76a451af9906e305019485d75ca8542e05d37c52b1a78aa9b03260e72a9",
	            "SandboxKey": "/var/run/docker/netns/9b8cb76a451a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33257"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33258"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-806420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71c0f0496cc56b89da0cbf1f1c56db8adab9c786627f80a5f88bceb2579ed18f",
	                    "EndpointID": "4f51b72426c6617cae47e2fdda00f505b01a2e4409f8f198ef7c23082a83c18d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "26:ee:81:d5:ee:08",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-806420",
	                        "11de8b8d4711"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
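For reference, the forwarded ports recorded in the inspect output above (22, 2376, 5000, 8444 and 32443, each bound to 127.0.0.1 with an ephemeral host port) can be read back individually with docker's Go-template output. A minimal sketch against the profile container from this run:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort }}' default-k8s-diff-port-806420

Given the NetworkSettings block above, this should print 33258, the host-side mapping for the profile's 8444 apiserver port in this run.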
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420: exit status 2 (339.390298ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
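Note that --format={{.Host}} only reports the host (container) state; the non-zero exit code from minikube status encodes the state of the remaining components, which is consistent with the pause attempt immediately above (the kicbase container keeps running while the Kubernetes components are paused or stopped). The full, untemplated view can be obtained with:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-806420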
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-806420 logs -n 25
E1202 16:18:28.264332  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/auto-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:28.270767  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/auto-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:28.282236  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/auto-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:28.305564  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/auto-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:28.347585  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/auto-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:28.428912  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/auto-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:28.590896  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/auto-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-806420 logs -n 25: (1.152172451s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-806420 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-046271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-806420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ old-k8s-version-380588 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p old-k8s-version-380588 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ no-preload-534842 image list --format=json                                                                                                                                                                                                           │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p no-preload-534842 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ image   │ embed-certs-046271 image list --format=json                                                                                                                                                                                                          │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p embed-certs-046271 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ delete  │ -p embed-certs-046271                                                                                                                                                                                                                                │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ addons  │ enable metrics-server -p newest-cni-682353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ delete  │ -p embed-certs-046271                                                                                                                                                                                                                                │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ stop    │ -p newest-cni-682353 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ default-k8s-diff-port-806420 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p default-k8s-diff-port-806420 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-682353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ start   │ -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:18:27
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:18:27.096789  632702 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:18:27.096907  632702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:27.096916  632702 out.go:374] Setting ErrFile to fd 2...
	I1202 16:18:27.096920  632702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:27.097170  632702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:18:27.097655  632702 out.go:368] Setting JSON to false
	I1202 16:18:27.098723  632702 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10848,"bootTime":1764681459,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:18:27.098783  632702 start.go:143] virtualization: kvm guest
	I1202 16:18:27.100925  632702 out.go:179] * [newest-cni-682353] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:18:27.102297  632702 notify.go:221] Checking for updates...
	I1202 16:18:27.102310  632702 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:18:27.103490  632702 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:18:27.104790  632702 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:18:27.105915  632702 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:18:27.106974  632702 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:18:27.108111  632702 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:18:27.109658  632702 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:18:27.110228  632702 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:18:27.133604  632702 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:18:27.133775  632702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:18:27.200970  632702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 16:18:27.188090505 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:18:27.201072  632702 docker.go:319] overlay module found
	I1202 16:18:27.203028  632702 out.go:179] * Using the docker driver based on existing profile
	I1202 16:18:27.204448  632702 start.go:309] selected driver: docker
	I1202 16:18:27.204470  632702 start.go:927] validating driver "docker" against &{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:27.204550  632702 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:18:27.205059  632702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:18:27.272023  632702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 16:18:27.259665242 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:18:27.272666  632702 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 16:18:27.272716  632702 cni.go:84] Creating CNI manager for ""
	I1202 16:18:27.272787  632702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:18:27.272867  632702 start.go:353] cluster config:
	{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:27.277019  632702 out.go:179] * Starting "newest-cni-682353" primary control-plane node in "newest-cni-682353" cluster
	I1202 16:18:27.278537  632702 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:18:27.279898  632702 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	
	
	==> CRI-O <==
	Dec 02 16:17:44 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:44.555534131Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 16:17:44 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:44.559349213Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 16:17:44 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:44.559371153Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.822710005Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b0f9dae5-487d-4642-a69e-a8dd1813c7ac name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.864150681Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=faca2670-b042-4159-b238-1a2acbe8ab2e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.903382281Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper" id=40a9b451-2833-4755-a080-d6f7d6964cb1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.903568324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.910547076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.911025668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.942848115Z" level=info msg="Created container 74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper" id=40a9b451-2833-4755-a080-d6f7d6964cb1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.943525377Z" level=info msg="Starting container: 74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab" id=88b7a245-1b8f-452d-8a20-601ef689bd33 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.945802033Z" level=info msg="Started container" PID=1798 containerID=74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper id=88b7a245-1b8f-452d-8a20-601ef689bd33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=76cb888ad6da939055fe61c5e1bdfc093ea4c45c8ebca043c39ca2eb40ad831e
	Dec 02 16:17:59 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:59.001972483Z" level=info msg="Removing container: 9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2" id=5380310e-6cfb-44e1-a241-9f202783ae64 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:59 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:59.144298136Z" level=info msg="Removed container 9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper" id=5380310e-6cfb-44e1-a241-9f202783ae64 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.821667554Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bb820137-0946-4cfa-8ef1-38fffb4c268e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.822865747Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4fe33f7d-5e30-44f9-a190-b8ea2e465713 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.824118065Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper" id=b230362f-351c-4bac-b701-75a5be979a8c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.824262693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.831111258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.831635114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.861076534Z" level=info msg="Created container 81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper" id=b230362f-351c-4bac-b701-75a5be979a8c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.861824938Z" level=info msg="Starting container: 81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2" id=d9f1d971-445c-4b0a-91a3-0658955bb112 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.864372766Z" level=info msg="Started container" PID=1834 containerID=81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper id=d9f1d971-445c-4b0a-91a3-0658955bb112 name=/runtime.v1.RuntimeService/StartContainer sandboxID=76cb888ad6da939055fe61c5e1bdfc093ea4c45c8ebca043c39ca2eb40ad831e
	Dec 02 16:18:19 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:19.038974996Z" level=info msg="Removing container: 74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab" id=13a4339a-6741-44c6-b792-bdeb30728e01 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:18:19 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:19.048977677Z" level=info msg="Removed container 74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper" id=13a4339a-6741-44c6-b792-bdeb30728e01 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	81c9fef3a0179       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   76cb888ad6da9       dashboard-metrics-scraper-6ffb444bf9-n8mpm             kubernetes-dashboard
	f87dd838bd5e6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   f52e93bbc5b6a       kubernetes-dashboard-855c9754f9-q97zr                  kubernetes-dashboard
	01ff7a0935c00       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Running             storage-provisioner         1                   56825b27afd3b       storage-provisioner                                    kube-system
	cc0af4f512184       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   9b047f81a7dae       coredns-66bc5c9577-6h6nr                               kube-system
	ef8610895132d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   ffc9afd85d8fd       busybox                                                default
	91fdfa7aaf4dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   56825b27afd3b       storage-provisioner                                    kube-system
	97371be307c60       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           54 seconds ago      Running             kube-proxy                  0                   b6a21cb626e74       kube-proxy-574km                                       kube-system
	70eac3d962ea7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   13eacbfbea256       kindnet-pc8st                                          kube-system
	dd7adc25ca0d8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           58 seconds ago      Running             kube-controller-manager     0                   d2a6b3701e89a       kube-controller-manager-default-k8s-diff-port-806420   kube-system
	85a4f9f063a68       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           58 seconds ago      Running             etcd                        0                   99171badb4618       etcd-default-k8s-diff-port-806420                      kube-system
	fa204ce25b4b7       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           58 seconds ago      Running             kube-scheduler              0                   3c47c0674e5a0       kube-scheduler-default-k8s-diff-port-806420            kube-system
	e986fe28a3e21       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           58 seconds ago      Running             kube-apiserver              0                   ba2566e6ac030       kube-apiserver-default-k8s-diff-port-806420            kube-system
	
	
	==> coredns [cc0af4f512184e70ee469ee170f2fb1000845f2a1d89dd0df1b818272e15e846] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54493 - 52971 "HINFO IN 4338479738599001029.4103854806184096063. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023431567s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-806420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-806420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=default-k8s-diff-port-806420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_16_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:16:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-806420
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:18:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:18:03 +0000   Tue, 02 Dec 2025 16:16:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:18:03 +0000   Tue, 02 Dec 2025 16:16:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:18:03 +0000   Tue, 02 Dec 2025 16:16:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:18:03 +0000   Tue, 02 Dec 2025 16:16:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-806420
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                48c4c192-0280-419c-8cb9-032c0b3b12b9
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-6h6nr                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-806420                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-pc8st                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-806420             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-806420    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-574km                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-806420             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-n8mpm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-q97zr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           110s               node-controller  Node default-k8s-diff-port-806420 event: Registered Node default-k8s-diff-port-806420 in Controller
	  Normal  NodeReady                98s                kubelet          Node default-k8s-diff-port-806420 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node default-k8s-diff-port-806420 event: Registered Node default-k8s-diff-port-806420 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [85a4f9f063a689e0c01b71338ce33ac27c1c4ef5a601031762f5f6f8468c7949] <==
	{"level":"warn","ts":"2025-12-02T16:17:31.963407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:31.978948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:31.984846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:31.995024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.005230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.016318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.027599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.038023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.047664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.057094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.065262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.080035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.100145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.111589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.122916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.133364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.141970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.150946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.170528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.174596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.183991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.270815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T16:17:59.144188Z","caller":"traceutil/trace.go:172","msg":"trace[1666271433] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"115.468996ms","start":"2025-12-02T16:17:59.028705Z","end":"2025-12-02T16:17:59.144174Z","steps":["trace[1666271433] 'process raft request'  (duration: 115.433668ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:17:59.144181Z","caller":"traceutil/trace.go:172","msg":"trace[1987990892] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"115.945489ms","start":"2025-12-02T16:17:59.028209Z","end":"2025-12-02T16:17:59.144154Z","steps":["trace[1987990892] 'process raft request'  (duration: 37.558851ms)","trace[1987990892] 'compare'  (duration: 78.21982ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:17:59.144256Z","caller":"traceutil/trace.go:172","msg":"trace[2034242801] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"115.979781ms","start":"2025-12-02T16:17:59.028254Z","end":"2025-12-02T16:17:59.144233Z","steps":["trace[2034242801] 'process raft request'  (duration: 115.84364ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:18:28 up  3:00,  0 user,  load average: 3.65, 4.01, 2.73
	Linux default-k8s-diff-port-806420 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [70eac3d962ea7a373f8ad31de48465816e6e12bf1ea039d59bc2f11a8500f8d1] <==
	I1202 16:17:34.332049       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:17:34.332064       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:17:34.332084       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:17:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:17:34.534167       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:17:34.534502       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:17:34.534544       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:17:34.534744       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1202 16:17:34.535217       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1202 16:17:34.535251       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 16:17:34.535281       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1202 16:17:34.535369       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1202 16:17:36.135130       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:17:36.135161       1 metrics.go:72] Registering metrics
	I1202 16:17:36.135213       1 controller.go:711] "Syncing nftables rules"
	I1202 16:17:44.534629       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:17:44.534701       1 main.go:301] handling current node
	I1202 16:17:54.537683       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:17:54.537724       1 main.go:301] handling current node
	I1202 16:18:04.534045       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:18:04.534085       1 main.go:301] handling current node
	I1202 16:18:14.535529       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:18:14.535561       1 main.go:301] handling current node
	I1202 16:18:24.534601       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:18:24.534662       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e986fe28a3e21e60cd56299b5d31eb8159c847908a86b5e9049cff20903959aa] <==
	I1202 16:17:32.956145       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 16:17:32.956231       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 16:17:32.956356       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 16:17:32.958877       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 16:17:32.958902       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 16:17:32.958922       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 16:17:32.959162       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 16:17:32.965462       1 aggregator.go:171] initial CRD sync complete...
	I1202 16:17:32.965924       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 16:17:32.965985       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 16:17:32.966015       1 cache.go:39] Caches are synced for autoregister controller
	I1202 16:17:32.970678       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 16:17:32.989701       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:17:33.006323       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:17:33.402308       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:17:33.434546       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:17:33.458143       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:17:33.467689       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:17:33.473899       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:17:33.508765       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.144.140"}
	I1202 16:17:33.520208       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.95.32"}
	I1202 16:17:33.856506       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 16:17:36.712636       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 16:17:36.762164       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 16:17:36.859959       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [dd7adc25ca0d8fd13c03d582eb1846e44e7ca31363dd13737dfcd8541ae71f4a] <==
	I1202 16:17:36.297463       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 16:17:36.307156       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 16:17:36.307256       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 16:17:36.307270       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 16:17:36.307365       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 16:17:36.307392       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 16:17:36.307555       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-806420"
	I1202 16:17:36.307631       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 16:17:36.307753       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 16:17:36.307752       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 16:17:36.307853       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 16:17:36.308022       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 16:17:36.308046       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 16:17:36.308272       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1202 16:17:36.308683       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 16:17:36.309648       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1202 16:17:36.309664       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 16:17:36.311919       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 16:17:36.313194       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 16:17:36.313750       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 16:17:36.315903       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1202 16:17:36.318135       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 16:17:36.321404       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 16:17:36.323607       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 16:17:36.330831       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [97371be307c6073ea64886ffd7ed0b82e3b043b73ecbb945f6562db301c6048c] <==
	I1202 16:17:34.216933       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:17:34.285270       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 16:17:34.385411       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 16:17:34.385493       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 16:17:34.385623       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:17:34.403952       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:17:34.404015       1 server_linux.go:132] "Using iptables Proxier"
	I1202 16:17:34.409083       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:17:34.410216       1 server.go:527] "Version info" version="v1.34.2"
	I1202 16:17:34.410299       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:17:34.412399       1 config.go:309] "Starting node config controller"
	I1202 16:17:34.412465       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:17:34.412503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:17:34.412550       1 config.go:200] "Starting service config controller"
	I1202 16:17:34.412567       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:17:34.412561       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:17:34.412593       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:17:34.412645       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:17:34.412530       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:17:34.513604       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 16:17:34.513626       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 16:17:34.513679       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fa204ce25b4b750a274bec528d833933338cbebe536dd59bd13e8ef6cec0cb00] <==
	I1202 16:17:31.773982       1 serving.go:386] Generated self-signed cert in-memory
	W1202 16:17:32.889746       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 16:17:32.889787       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 16:17:32.889800       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 16:17:32.889819       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 16:17:32.944848       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 16:17:32.946220       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:17:32.952678       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 16:17:32.952780       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:17:32.955267       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:17:32.952804       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 16:17:33.055569       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 16:17:40 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:40.917272     738 scope.go:117] "RemoveContainer" containerID="4bd951adc946de032954a0348377441c62c4ff165d97b8e3100bf799a39d0a6c"
	Dec 02 16:17:40 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:40.917497     738 scope.go:117] "RemoveContainer" containerID="9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2"
	Dec 02 16:17:40 default-k8s-diff-port-806420 kubelet[738]: E1202 16:17:40.917765     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:17:41 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:41.398241     738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 02 16:17:41 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:41.921652     738 scope.go:117] "RemoveContainer" containerID="9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2"
	Dec 02 16:17:41 default-k8s-diff-port-806420 kubelet[738]: E1202 16:17:41.921848     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:17:43 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:43.939714     738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-q97zr" podStartSLOduration=1.730232567 podStartE2EDuration="7.939688428s" podCreationTimestamp="2025-12-02 16:17:36 +0000 UTC" firstStartedPulling="2025-12-02 16:17:37.161858748 +0000 UTC m=+7.441287980" lastFinishedPulling="2025-12-02 16:17:43.371314623 +0000 UTC m=+13.650743841" observedRunningTime="2025-12-02 16:17:43.939142426 +0000 UTC m=+14.218571666" watchObservedRunningTime="2025-12-02 16:17:43.939688428 +0000 UTC m=+14.219117665"
	Dec 02 16:17:44 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:44.214124     738 scope.go:117] "RemoveContainer" containerID="9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2"
	Dec 02 16:17:44 default-k8s-diff-port-806420 kubelet[738]: E1202 16:17:44.214372     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:17:57 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:57.821502     738 scope.go:117] "RemoveContainer" containerID="9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2"
	Dec 02 16:17:58 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:58.979270     738 scope.go:117] "RemoveContainer" containerID="9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2"
	Dec 02 16:17:58 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:58.979494     738 scope.go:117] "RemoveContainer" containerID="74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab"
	Dec 02 16:17:58 default-k8s-diff-port-806420 kubelet[738]: E1202 16:17:58.979724     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:18:04 default-k8s-diff-port-806420 kubelet[738]: I1202 16:18:04.214547     738 scope.go:117] "RemoveContainer" containerID="74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab"
	Dec 02 16:18:04 default-k8s-diff-port-806420 kubelet[738]: E1202 16:18:04.214829     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:18:18 default-k8s-diff-port-806420 kubelet[738]: I1202 16:18:18.821009     738 scope.go:117] "RemoveContainer" containerID="74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab"
	Dec 02 16:18:19 default-k8s-diff-port-806420 kubelet[738]: I1202 16:18:19.037682     738 scope.go:117] "RemoveContainer" containerID="74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab"
	Dec 02 16:18:19 default-k8s-diff-port-806420 kubelet[738]: I1202 16:18:19.037948     738 scope.go:117] "RemoveContainer" containerID="81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2"
	Dec 02 16:18:19 default-k8s-diff-port-806420 kubelet[738]: E1202 16:18:19.038155     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:18:24 default-k8s-diff-port-806420 kubelet[738]: I1202 16:18:24.215103     738 scope.go:117] "RemoveContainer" containerID="81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2"
	Dec 02 16:18:24 default-k8s-diff-port-806420 kubelet[738]: E1202 16:18:24.215333     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:18:25 default-k8s-diff-port-806420 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 16:18:25 default-k8s-diff-port-806420 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 16:18:25 default-k8s-diff-port-806420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 16:18:25 default-k8s-diff-port-806420 systemd[1]: kubelet.service: Consumed 1.858s CPU time.
	
	
	==> kubernetes-dashboard [f87dd838bd5e6c4c94fe2d031797c0e5265616b837be2d71816a40b69471ead9] <==
	2025/12/02 16:17:43 Starting overwatch
	2025/12/02 16:17:43 Using namespace: kubernetes-dashboard
	2025/12/02 16:17:43 Using in-cluster config to connect to apiserver
	2025/12/02 16:17:43 Using secret token for csrf signing
	2025/12/02 16:17:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 16:17:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 16:17:43 Successful initial request to the apiserver, version: v1.34.2
	2025/12/02 16:17:43 Generating JWE encryption key
	2025/12/02 16:17:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 16:17:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 16:17:43 Initializing JWE encryption key from synchronized object
	2025/12/02 16:17:43 Creating in-cluster Sidecar client
	2025/12/02 16:17:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:43 Serving insecurely on HTTP port: 9090
	2025/12/02 16:18:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [01ff7a0935c00a17773bcff619702fa65a201219319ebec0982ffe9c505a8069] <==
	W1202 16:18:04.415634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:06.419524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:06.424315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:08.427150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:08.434148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:10.437086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:10.441296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:12.444215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:12.450165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:14.453563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:14.459059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:16.463515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:16.468401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:18.472411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:18.477753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:20.481515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:20.485707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:22.491397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:22.496396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:24.499213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:24.503040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:26.506832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:26.510604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:28.513590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:28.518641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [91fdfa7aaf4dd7592031b0da30d9eef5afa4036cab29173c60bf23797dbfd1e5] <==
	I1202 16:17:34.184746       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 16:17:34.189048       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420
E1202 16:18:28.912613  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/auto-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420: exit status 2 (338.763602ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-806420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-806420
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-806420:

-- stdout --
	[
	    {
	        "Id": "11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b",
	        "Created": "2025-12-02T16:16:19.182047028Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 617218,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:17:22.76605353Z",
	            "FinishedAt": "2025-12-02T16:17:21.666597736Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b/hostname",
	        "HostsPath": "/var/lib/docker/containers/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b/hosts",
	        "LogPath": "/var/lib/docker/containers/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b/11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b-json.log",
	        "Name": "/default-k8s-diff-port-806420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-806420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-806420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11de8b8d47119303090fb424ba8db00144940cdfc7fe8b446b8b50d3106ff09b",
	                "LowerDir": "/var/lib/docker/overlay2/798271a5d7d9be2b9f56abfd6664a7da5e191a27e4202bc76317776877650382-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/798271a5d7d9be2b9f56abfd6664a7da5e191a27e4202bc76317776877650382/merged",
	                "UpperDir": "/var/lib/docker/overlay2/798271a5d7d9be2b9f56abfd6664a7da5e191a27e4202bc76317776877650382/diff",
	                "WorkDir": "/var/lib/docker/overlay2/798271a5d7d9be2b9f56abfd6664a7da5e191a27e4202bc76317776877650382/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-806420",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-806420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-806420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-806420",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-806420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9b8cb76a451af9906e305019485d75ca8542e05d37c52b1a78aa9b03260e72a9",
	            "SandboxKey": "/var/run/docker/netns/9b8cb76a451a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33257"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33258"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-806420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71c0f0496cc56b89da0cbf1f1c56db8adab9c786627f80a5f88bceb2579ed18f",
	                    "EndpointID": "4f51b72426c6617cae47e2fdda00f505b01a2e4409f8f198ef7c23082a83c18d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "26:ee:81:d5:ee:08",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-806420",
	                        "11de8b8d4711"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420: exit status 2 (325.181708ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-806420 logs -n 25
E1202 16:18:29.554867  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/auto-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-806420 logs -n 25: (1.061998586s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-806420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-806420 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-046271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-806420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ old-k8s-version-380588 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p old-k8s-version-380588 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ no-preload-534842 image list --format=json                                                                                                                                                                                                           │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p no-preload-534842 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ image   │ embed-certs-046271 image list --format=json                                                                                                                                                                                                          │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p embed-certs-046271 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ delete  │ -p embed-certs-046271                                                                                                                                                                                                                                │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ addons  │ enable metrics-server -p newest-cni-682353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ delete  │ -p embed-certs-046271                                                                                                                                                                                                                                │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ stop    │ -p newest-cni-682353 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ default-k8s-diff-port-806420 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p default-k8s-diff-port-806420 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-682353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ start   │ -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:18:27
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:18:27.096789  632702 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:18:27.096907  632702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:27.096916  632702 out.go:374] Setting ErrFile to fd 2...
	I1202 16:18:27.096920  632702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:27.097170  632702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:18:27.097655  632702 out.go:368] Setting JSON to false
	I1202 16:18:27.098723  632702 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10848,"bootTime":1764681459,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:18:27.098783  632702 start.go:143] virtualization: kvm guest
	I1202 16:18:27.100925  632702 out.go:179] * [newest-cni-682353] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:18:27.102297  632702 notify.go:221] Checking for updates...
	I1202 16:18:27.102310  632702 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:18:27.103490  632702 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:18:27.104790  632702 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:18:27.105915  632702 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:18:27.106974  632702 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:18:27.108111  632702 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:18:27.109658  632702 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:18:27.110228  632702 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:18:27.133604  632702 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:18:27.133775  632702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:18:27.200970  632702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 16:18:27.188090505 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:18:27.201072  632702 docker.go:319] overlay module found
	I1202 16:18:27.203028  632702 out.go:179] * Using the docker driver based on existing profile
	I1202 16:18:27.204448  632702 start.go:309] selected driver: docker
	I1202 16:18:27.204470  632702 start.go:927] validating driver "docker" against &{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:27.204550  632702 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:18:27.205059  632702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:18:27.272023  632702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 16:18:27.259665242 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:18:27.272666  632702 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 16:18:27.272716  632702 cni.go:84] Creating CNI manager for ""
	I1202 16:18:27.272787  632702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:18:27.272867  632702 start.go:353] cluster config:
	{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:27.277019  632702 out.go:179] * Starting "newest-cni-682353" primary control-plane node in "newest-cni-682353" cluster
	I1202 16:18:27.278537  632702 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:18:27.279898  632702 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:18:27.280976  632702 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:18:27.281047  632702 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:18:27.305155  632702 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:18:27.305175  632702 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 16:18:27.866989  632702 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1202 16:18:27.881613  632702 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1202 16:18:27.881752  632702 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json ...
	I1202 16:18:27.881859  632702 cache.go:107] acquiring lock: {Name:mk821cef64e8468a2739d03d2e1019ac980bf2cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881904  632702 cache.go:107] acquiring lock: {Name:mkce5d795e0ca01a9ee3d674d001cd6e04bbbfba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881880  632702 cache.go:107] acquiring lock: {Name:mk3f4d40fdf359ce0573637a386f14c0a310cdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881933  632702 cache.go:107] acquiring lock: {Name:mkec45cdfdbdafc0ef1296b9d77662a50add1cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881857  632702 cache.go:107] acquiring lock: {Name:mk6b8eeb5270fa67a5a87f892f37de1ae4805f75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881939  632702 cache.go:107] acquiring lock: {Name:mka2aa325920dfb2720f9036278856e8dac95446 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881982  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 16:18:27.881987  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 16:18:27.882001  632702 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 138.196µs
	I1202 16:18:27.882001  632702 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 97.896µs
	I1202 16:18:27.882003  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 16:18:27.882001  632702 cache.go:107] acquiring lock: {Name:mk91bc91bcc535b3edd8200bf0c06e4d97781487 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.882022  632702 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 16:18:27.882017  632702 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 180.698µs
	I1202 16:18:27.881986  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 16:18:27.882024  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 16:18:27.882019  632702 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 16:18:27.882034  632702 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 198.789µs
	I1202 16:18:27.882043  632702 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 16:18:27.881959  632702 cache.go:107] acquiring lock: {Name:mk17b77bf762047097cbe060b18dc85ae78a9727 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.882057  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 16:18:27.882072  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 16:18:27.882079  632702 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 139.818µs
	I1202 16:18:27.882085  632702 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 16:18:27.882080  632702 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 146.002µs
	I1202 16:18:27.882040  632702 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 116.393µs
	I1202 16:18:27.882094  632702 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 16:18:27.882026  632702 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:18:27.882062  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 16:18:27.882095  632702 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 16:18:27.882108  632702 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 112.685µs
	I1202 16:18:27.882116  632702 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 16:18:27.882030  632702 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 16:18:27.882127  632702 cache.go:87] Successfully saved all images to host disk.
	I1202 16:18:27.882132  632702 start.go:360] acquireMachinesLock for newest-cni-682353: {Name:mkfed8f02380af59f92aa0b6f8ae02a29dbe0c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.882157  632702 start.go:364] duration metric: took 15.081µs to acquireMachinesLock for "newest-cni-682353"
	I1202 16:18:27.882174  632702 start.go:96] Skipping create...Using existing machine configuration
	I1202 16:18:27.882187  632702 fix.go:54] fixHost starting: 
	I1202 16:18:27.882409  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:27.899829  632702 fix.go:112] recreateIfNeeded on newest-cni-682353: state=Stopped err=<nil>
	W1202 16:18:27.899862  632702 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 02 16:17:44 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:44.555534131Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 16:17:44 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:44.559349213Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 16:17:44 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:44.559371153Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.822710005Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b0f9dae5-487d-4642-a69e-a8dd1813c7ac name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.864150681Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=faca2670-b042-4159-b238-1a2acbe8ab2e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.903382281Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper" id=40a9b451-2833-4755-a080-d6f7d6964cb1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.903568324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.910547076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.911025668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.942848115Z" level=info msg="Created container 74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper" id=40a9b451-2833-4755-a080-d6f7d6964cb1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.943525377Z" level=info msg="Starting container: 74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab" id=88b7a245-1b8f-452d-8a20-601ef689bd33 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:17:57 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:57.945802033Z" level=info msg="Started container" PID=1798 containerID=74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper id=88b7a245-1b8f-452d-8a20-601ef689bd33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=76cb888ad6da939055fe61c5e1bdfc093ea4c45c8ebca043c39ca2eb40ad831e
	Dec 02 16:17:59 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:59.001972483Z" level=info msg="Removing container: 9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2" id=5380310e-6cfb-44e1-a241-9f202783ae64 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:17:59 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:17:59.144298136Z" level=info msg="Removed container 9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper" id=5380310e-6cfb-44e1-a241-9f202783ae64 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.821667554Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bb820137-0946-4cfa-8ef1-38fffb4c268e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.822865747Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4fe33f7d-5e30-44f9-a190-b8ea2e465713 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.824118065Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper" id=b230362f-351c-4bac-b701-75a5be979a8c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.824262693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.831111258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.831635114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.861076534Z" level=info msg="Created container 81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper" id=b230362f-351c-4bac-b701-75a5be979a8c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.861824938Z" level=info msg="Starting container: 81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2" id=d9f1d971-445c-4b0a-91a3-0658955bb112 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:18:18 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:18.864372766Z" level=info msg="Started container" PID=1834 containerID=81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper id=d9f1d971-445c-4b0a-91a3-0658955bb112 name=/runtime.v1.RuntimeService/StartContainer sandboxID=76cb888ad6da939055fe61c5e1bdfc093ea4c45c8ebca043c39ca2eb40ad831e
	Dec 02 16:18:19 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:19.038974996Z" level=info msg="Removing container: 74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab" id=13a4339a-6741-44c6-b792-bdeb30728e01 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 16:18:19 default-k8s-diff-port-806420 crio[571]: time="2025-12-02T16:18:19.048977677Z" level=info msg="Removed container 74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm/dashboard-metrics-scraper" id=13a4339a-6741-44c6-b792-bdeb30728e01 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	81c9fef3a0179       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   76cb888ad6da9       dashboard-metrics-scraper-6ffb444bf9-n8mpm             kubernetes-dashboard
	f87dd838bd5e6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   f52e93bbc5b6a       kubernetes-dashboard-855c9754f9-q97zr                  kubernetes-dashboard
	01ff7a0935c00       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Running             storage-provisioner         1                   56825b27afd3b       storage-provisioner                                    kube-system
	cc0af4f512184       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   9b047f81a7dae       coredns-66bc5c9577-6h6nr                               kube-system
	ef8610895132d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   ffc9afd85d8fd       busybox                                                default
	91fdfa7aaf4dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   56825b27afd3b       storage-provisioner                                    kube-system
	97371be307c60       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           56 seconds ago      Running             kube-proxy                  0                   b6a21cb626e74       kube-proxy-574km                                       kube-system
	70eac3d962ea7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   13eacbfbea256       kindnet-pc8st                                          kube-system
	dd7adc25ca0d8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           59 seconds ago      Running             kube-controller-manager     0                   d2a6b3701e89a       kube-controller-manager-default-k8s-diff-port-806420   kube-system
	85a4f9f063a68       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           59 seconds ago      Running             etcd                        0                   99171badb4618       etcd-default-k8s-diff-port-806420                      kube-system
	fa204ce25b4b7       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           59 seconds ago      Running             kube-scheduler              0                   3c47c0674e5a0       kube-scheduler-default-k8s-diff-port-806420            kube-system
	e986fe28a3e21       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           59 seconds ago      Running             kube-apiserver              0                   ba2566e6ac030       kube-apiserver-default-k8s-diff-port-806420            kube-system
	
	
	==> coredns [cc0af4f512184e70ee469ee170f2fb1000845f2a1d89dd0df1b818272e15e846] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54493 - 52971 "HINFO IN 4338479738599001029.4103854806184096063. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023431567s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-806420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-806420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=default-k8s-diff-port-806420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_16_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:16:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-806420
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:18:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:18:03 +0000   Tue, 02 Dec 2025 16:16:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:18:03 +0000   Tue, 02 Dec 2025 16:16:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:18:03 +0000   Tue, 02 Dec 2025 16:16:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 16:18:03 +0000   Tue, 02 Dec 2025 16:16:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-806420
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                48c4c192-0280-419c-8cb9-032c0b3b12b9
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-6h6nr                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-default-k8s-diff-port-806420                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-pc8st                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-806420             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-806420    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-574km                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-806420             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-n8mpm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-q97zr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           112s               node-controller  Node default-k8s-diff-port-806420 event: Registered Node default-k8s-diff-port-806420 in Controller
	  Normal  NodeReady                100s               kubelet          Node default-k8s-diff-port-806420 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-806420 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node default-k8s-diff-port-806420 event: Registered Node default-k8s-diff-port-806420 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [85a4f9f063a689e0c01b71338ce33ac27c1c4ef5a601031762f5f6f8468c7949] <==
	{"level":"warn","ts":"2025-12-02T16:17:31.963407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:31.978948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:31.984846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:31.995024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.005230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.016318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.027599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.038023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.047664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.057094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.065262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.080035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.100145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.111589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.122916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.133364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.141970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.150946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.170528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.174596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.183991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:17:32.270815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T16:17:59.144188Z","caller":"traceutil/trace.go:172","msg":"trace[1666271433] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"115.468996ms","start":"2025-12-02T16:17:59.028705Z","end":"2025-12-02T16:17:59.144174Z","steps":["trace[1666271433] 'process raft request'  (duration: 115.433668ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T16:17:59.144181Z","caller":"traceutil/trace.go:172","msg":"trace[1987990892] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"115.945489ms","start":"2025-12-02T16:17:59.028209Z","end":"2025-12-02T16:17:59.144154Z","steps":["trace[1987990892] 'process raft request'  (duration: 37.558851ms)","trace[1987990892] 'compare'  (duration: 78.21982ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T16:17:59.144256Z","caller":"traceutil/trace.go:172","msg":"trace[2034242801] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"115.979781ms","start":"2025-12-02T16:17:59.028254Z","end":"2025-12-02T16:17:59.144233Z","steps":["trace[2034242801] 'process raft request'  (duration: 115.84364ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:18:30 up  3:00,  0 user,  load average: 3.65, 4.01, 2.73
	Linux default-k8s-diff-port-806420 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [70eac3d962ea7a373f8ad31de48465816e6e12bf1ea039d59bc2f11a8500f8d1] <==
	I1202 16:17:34.332049       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:17:34.332064       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:17:34.332084       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:17:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:17:34.534167       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:17:34.534502       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:17:34.534544       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:17:34.534744       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1202 16:17:34.535217       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1202 16:17:34.535251       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 16:17:34.535281       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1202 16:17:34.535369       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1202 16:17:36.135130       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:17:36.135161       1 metrics.go:72] Registering metrics
	I1202 16:17:36.135213       1 controller.go:711] "Syncing nftables rules"
	I1202 16:17:44.534629       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:17:44.534701       1 main.go:301] handling current node
	I1202 16:17:54.537683       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:17:54.537724       1 main.go:301] handling current node
	I1202 16:18:04.534045       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:18:04.534085       1 main.go:301] handling current node
	I1202 16:18:14.535529       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:18:14.535561       1 main.go:301] handling current node
	I1202 16:18:24.534601       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 16:18:24.534662       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e986fe28a3e21e60cd56299b5d31eb8159c847908a86b5e9049cff20903959aa] <==
	I1202 16:17:32.956145       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 16:17:32.956231       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 16:17:32.956356       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 16:17:32.958877       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 16:17:32.958902       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 16:17:32.958922       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 16:17:32.959162       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 16:17:32.965462       1 aggregator.go:171] initial CRD sync complete...
	I1202 16:17:32.965924       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 16:17:32.965985       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 16:17:32.966015       1 cache.go:39] Caches are synced for autoregister controller
	I1202 16:17:32.970678       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 16:17:32.989701       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 16:17:33.006323       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:17:33.402308       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:17:33.434546       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:17:33.458143       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:17:33.467689       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:17:33.473899       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:17:33.508765       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.144.140"}
	I1202 16:17:33.520208       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.95.32"}
	I1202 16:17:33.856506       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 16:17:36.712636       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 16:17:36.762164       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 16:17:36.859959       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [dd7adc25ca0d8fd13c03d582eb1846e44e7ca31363dd13737dfcd8541ae71f4a] <==
	I1202 16:17:36.297463       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 16:17:36.307156       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 16:17:36.307256       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 16:17:36.307270       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 16:17:36.307365       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 16:17:36.307392       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 16:17:36.307555       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-806420"
	I1202 16:17:36.307631       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 16:17:36.307753       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 16:17:36.307752       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 16:17:36.307853       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 16:17:36.308022       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 16:17:36.308046       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 16:17:36.308272       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1202 16:17:36.308683       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 16:17:36.309648       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1202 16:17:36.309664       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 16:17:36.311919       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 16:17:36.313194       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 16:17:36.313750       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 16:17:36.315903       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1202 16:17:36.318135       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 16:17:36.321404       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 16:17:36.323607       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 16:17:36.330831       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [97371be307c6073ea64886ffd7ed0b82e3b043b73ecbb945f6562db301c6048c] <==
	I1202 16:17:34.216933       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:17:34.285270       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 16:17:34.385411       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 16:17:34.385493       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 16:17:34.385623       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:17:34.403952       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:17:34.404015       1 server_linux.go:132] "Using iptables Proxier"
	I1202 16:17:34.409083       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:17:34.410216       1 server.go:527] "Version info" version="v1.34.2"
	I1202 16:17:34.410299       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:17:34.412399       1 config.go:309] "Starting node config controller"
	I1202 16:17:34.412465       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:17:34.412503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:17:34.412550       1 config.go:200] "Starting service config controller"
	I1202 16:17:34.412567       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:17:34.412561       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:17:34.412593       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:17:34.412645       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:17:34.412530       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:17:34.513604       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 16:17:34.513626       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 16:17:34.513679       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fa204ce25b4b750a274bec528d833933338cbebe536dd59bd13e8ef6cec0cb00] <==
	I1202 16:17:31.773982       1 serving.go:386] Generated self-signed cert in-memory
	W1202 16:17:32.889746       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 16:17:32.889787       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 16:17:32.889800       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 16:17:32.889819       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 16:17:32.944848       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 16:17:32.946220       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:17:32.952678       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 16:17:32.952780       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:17:32.955267       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:17:32.952804       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 16:17:33.055569       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 16:17:40 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:40.917272     738 scope.go:117] "RemoveContainer" containerID="4bd951adc946de032954a0348377441c62c4ff165d97b8e3100bf799a39d0a6c"
	Dec 02 16:17:40 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:40.917497     738 scope.go:117] "RemoveContainer" containerID="9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2"
	Dec 02 16:17:40 default-k8s-diff-port-806420 kubelet[738]: E1202 16:17:40.917765     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:17:41 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:41.398241     738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 02 16:17:41 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:41.921652     738 scope.go:117] "RemoveContainer" containerID="9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2"
	Dec 02 16:17:41 default-k8s-diff-port-806420 kubelet[738]: E1202 16:17:41.921848     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:17:43 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:43.939714     738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-q97zr" podStartSLOduration=1.730232567 podStartE2EDuration="7.939688428s" podCreationTimestamp="2025-12-02 16:17:36 +0000 UTC" firstStartedPulling="2025-12-02 16:17:37.161858748 +0000 UTC m=+7.441287980" lastFinishedPulling="2025-12-02 16:17:43.371314623 +0000 UTC m=+13.650743841" observedRunningTime="2025-12-02 16:17:43.939142426 +0000 UTC m=+14.218571666" watchObservedRunningTime="2025-12-02 16:17:43.939688428 +0000 UTC m=+14.219117665"
	Dec 02 16:17:44 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:44.214124     738 scope.go:117] "RemoveContainer" containerID="9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2"
	Dec 02 16:17:44 default-k8s-diff-port-806420 kubelet[738]: E1202 16:17:44.214372     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:17:57 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:57.821502     738 scope.go:117] "RemoveContainer" containerID="9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2"
	Dec 02 16:17:58 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:58.979270     738 scope.go:117] "RemoveContainer" containerID="9e9ad99d8fce4dd52eca8dc0b4e0388360b1bdf09a84f5c51ca4a69fba742be2"
	Dec 02 16:17:58 default-k8s-diff-port-806420 kubelet[738]: I1202 16:17:58.979494     738 scope.go:117] "RemoveContainer" containerID="74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab"
	Dec 02 16:17:58 default-k8s-diff-port-806420 kubelet[738]: E1202 16:17:58.979724     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:18:04 default-k8s-diff-port-806420 kubelet[738]: I1202 16:18:04.214547     738 scope.go:117] "RemoveContainer" containerID="74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab"
	Dec 02 16:18:04 default-k8s-diff-port-806420 kubelet[738]: E1202 16:18:04.214829     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:18:18 default-k8s-diff-port-806420 kubelet[738]: I1202 16:18:18.821009     738 scope.go:117] "RemoveContainer" containerID="74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab"
	Dec 02 16:18:19 default-k8s-diff-port-806420 kubelet[738]: I1202 16:18:19.037682     738 scope.go:117] "RemoveContainer" containerID="74e42e3441d83bdf5357dbc187f4e11a36f79cdadc5dee7f69beb385997735ab"
	Dec 02 16:18:19 default-k8s-diff-port-806420 kubelet[738]: I1202 16:18:19.037948     738 scope.go:117] "RemoveContainer" containerID="81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2"
	Dec 02 16:18:19 default-k8s-diff-port-806420 kubelet[738]: E1202 16:18:19.038155     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:18:24 default-k8s-diff-port-806420 kubelet[738]: I1202 16:18:24.215103     738 scope.go:117] "RemoveContainer" containerID="81c9fef3a0179dc6cad2067a5c2d11cd35b328967b80abf2fdd9b8c439e0cff2"
	Dec 02 16:18:24 default-k8s-diff-port-806420 kubelet[738]: E1202 16:18:24.215333     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8mpm_kubernetes-dashboard(fa137698-f778-48e5-b744-3584b36e2f95)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8mpm" podUID="fa137698-f778-48e5-b744-3584b36e2f95"
	Dec 02 16:18:25 default-k8s-diff-port-806420 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 16:18:25 default-k8s-diff-port-806420 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 16:18:25 default-k8s-diff-port-806420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 16:18:25 default-k8s-diff-port-806420 systemd[1]: kubelet.service: Consumed 1.858s CPU time.
	
	
	==> kubernetes-dashboard [f87dd838bd5e6c4c94fe2d031797c0e5265616b837be2d71816a40b69471ead9] <==
	2025/12/02 16:17:43 Using namespace: kubernetes-dashboard
	2025/12/02 16:17:43 Using in-cluster config to connect to apiserver
	2025/12/02 16:17:43 Using secret token for csrf signing
	2025/12/02 16:17:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 16:17:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 16:17:43 Successful initial request to the apiserver, version: v1.34.2
	2025/12/02 16:17:43 Generating JWE encryption key
	2025/12/02 16:17:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 16:17:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 16:17:43 Initializing JWE encryption key from synchronized object
	2025/12/02 16:17:43 Creating in-cluster Sidecar client
	2025/12/02 16:17:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:43 Serving insecurely on HTTP port: 9090
	2025/12/02 16:18:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 16:17:43 Starting overwatch
	
	
	==> storage-provisioner [01ff7a0935c00a17773bcff619702fa65a201219319ebec0982ffe9c505a8069] <==
	W1202 16:18:06.424315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:08.427150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:08.434148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:10.437086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:10.441296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:12.444215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:12.450165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:14.453563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:14.459059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:16.463515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:16.468401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:18.472411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:18.477753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:20.481515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:20.485707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:22.491397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:22.496396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:24.499213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:24.503040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:26.506832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:26.510604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:28.513590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:28.518641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:30.521713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 16:18:30.526287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [91fdfa7aaf4dd7592031b0da30d9eef5afa4036cab29173c60bf23797dbfd1e5] <==
	I1202 16:17:34.184746       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 16:17:34.189048       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420
E1202 16:18:30.836941  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/auto-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420: exit status 2 (333.157945ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
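For context, the status query above only prints the APIServer field; a fuller per-component view of the same profile can be pulled with the status command's other output modes. This is an illustrative follow-up, not part of the recorded run, and assumes the default-k8s-diff-port-806420 profile still exists at this point in the test:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-806420                 # table form: Host, Kubelet, APIServer, Kubeconfig
	out/minikube-linux-amd64 status -p default-k8s-diff-port-806420 --output json   # machine-readable form of the same fields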
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-806420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-682353 --alsologtostderr -v=1
E1202 16:18:39.082552  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kindnet-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:39.088945  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kindnet-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:39.100482  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kindnet-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:39.122144  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kindnet-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:39.163581  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kindnet-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:39.245545  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kindnet-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:39.407192  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kindnet-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:39.729127  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kindnet-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:18:40.371322  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kindnet-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-682353 --alsologtostderr -v=1: exit status 80 (1.733088984s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-682353 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 16:18:38.893388  636409 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:18:38.893707  636409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:38.893721  636409 out.go:374] Setting ErrFile to fd 2...
	I1202 16:18:38.893727  636409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:38.894042  636409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:18:38.894282  636409 out.go:368] Setting JSON to false
	I1202 16:18:38.894302  636409 mustload.go:66] Loading cluster: newest-cni-682353
	I1202 16:18:38.894665  636409 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:18:38.895046  636409 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:38.913444  636409 host.go:66] Checking if "newest-cni-682353" exists ...
	I1202 16:18:38.913745  636409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:18:38.974067  636409 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 16:18:38.963255387 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:18:38.974968  636409 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-682353 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 16:18:38.976954  636409 out.go:179] * Pausing node newest-cni-682353 ... 
	I1202 16:18:38.978047  636409 host.go:66] Checking if "newest-cni-682353" exists ...
	I1202 16:18:38.978321  636409 ssh_runner.go:195] Run: systemctl --version
	I1202 16:18:38.978366  636409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:38.997113  636409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:39.096186  636409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:39.108771  636409 pause.go:52] kubelet running: true
	I1202 16:18:39.108835  636409 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:18:39.240165  636409 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:18:39.240248  636409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:18:39.306868  636409 cri.go:89] found id: "91fc3c64a25e3fda17bac57736962354faa703cde5738eaae25bd35fa4c465c3"
	I1202 16:18:39.306894  636409 cri.go:89] found id: "4e5a1f21c78595fd7dc11cf79c1d7485a73ccb9864314205ffa58d85454752b5"
	I1202 16:18:39.306899  636409 cri.go:89] found id: "cb312f13091c58bd72d656a9744dbd05f9804fdf67c95588b994fb7a3c8a08b7"
	I1202 16:18:39.306902  636409 cri.go:89] found id: "c8f017ed73870ab02759b08f235ff372e1d39e18f2cba24a7dc958208be38f45"
	I1202 16:18:39.306905  636409 cri.go:89] found id: "637f7511012f268dc11abb2bdb14e8541a010a8282803345662aee9434c58f91"
	I1202 16:18:39.306908  636409 cri.go:89] found id: "0d367e0e69f0e7e85292b0ba7c75a0d708dac3e3ee3b2f01dc0c4ea1736b98fc"
	I1202 16:18:39.306911  636409 cri.go:89] found id: ""
	I1202 16:18:39.306948  636409 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:18:39.318822  636409 retry.go:31] will retry after 301.669092ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:39Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:18:39.621374  636409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:39.634749  636409 pause.go:52] kubelet running: false
	I1202 16:18:39.634796  636409 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:18:39.748581  636409 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:18:39.748692  636409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:18:39.827474  636409 cri.go:89] found id: "91fc3c64a25e3fda17bac57736962354faa703cde5738eaae25bd35fa4c465c3"
	I1202 16:18:39.827494  636409 cri.go:89] found id: "4e5a1f21c78595fd7dc11cf79c1d7485a73ccb9864314205ffa58d85454752b5"
	I1202 16:18:39.827499  636409 cri.go:89] found id: "cb312f13091c58bd72d656a9744dbd05f9804fdf67c95588b994fb7a3c8a08b7"
	I1202 16:18:39.827502  636409 cri.go:89] found id: "c8f017ed73870ab02759b08f235ff372e1d39e18f2cba24a7dc958208be38f45"
	I1202 16:18:39.827505  636409 cri.go:89] found id: "637f7511012f268dc11abb2bdb14e8541a010a8282803345662aee9434c58f91"
	I1202 16:18:39.827509  636409 cri.go:89] found id: "0d367e0e69f0e7e85292b0ba7c75a0d708dac3e3ee3b2f01dc0c4ea1736b98fc"
	I1202 16:18:39.827512  636409 cri.go:89] found id: ""
	I1202 16:18:39.827549  636409 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:18:39.838953  636409 retry.go:31] will retry after 503.226944ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:39Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:18:40.342643  636409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:18:40.355982  636409 pause.go:52] kubelet running: false
	I1202 16:18:40.356039  636409 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 16:18:40.475573  636409 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 16:18:40.475699  636409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 16:18:40.542252  636409 cri.go:89] found id: "91fc3c64a25e3fda17bac57736962354faa703cde5738eaae25bd35fa4c465c3"
	I1202 16:18:40.542280  636409 cri.go:89] found id: "4e5a1f21c78595fd7dc11cf79c1d7485a73ccb9864314205ffa58d85454752b5"
	I1202 16:18:40.542286  636409 cri.go:89] found id: "cb312f13091c58bd72d656a9744dbd05f9804fdf67c95588b994fb7a3c8a08b7"
	I1202 16:18:40.542291  636409 cri.go:89] found id: "c8f017ed73870ab02759b08f235ff372e1d39e18f2cba24a7dc958208be38f45"
	I1202 16:18:40.542296  636409 cri.go:89] found id: "637f7511012f268dc11abb2bdb14e8541a010a8282803345662aee9434c58f91"
	I1202 16:18:40.542301  636409 cri.go:89] found id: "0d367e0e69f0e7e85292b0ba7c75a0d708dac3e3ee3b2f01dc0c4ea1736b98fc"
	I1202 16:18:40.542306  636409 cri.go:89] found id: ""
	I1202 16:18:40.542354  636409 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 16:18:40.556176  636409 out.go:203] 
	W1202 16:18:40.557476  636409 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 16:18:40.557509  636409 out.go:285] * 
	* 
	W1202 16:18:40.561702  636409 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 16:18:40.563196  636409 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-682353 --alsologtostderr -v=1 failed: exit status 80
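The stderr trace above shows where the pause sequence stopped: disabling the kubelet and the crictl container listing both succeeded, but every retry of "sudo runc list -f json" failed with "open /run/runc: no such file or directory" (runc's default state directory). A minimal way to re-run those same checks by hand against the node, sketched here on the assumption that the newest-cni-682353 container is still up and reachable via minikube ssh:

	# the container listing that succeeded in the trace
	out/minikube-linux-amd64 ssh -p newest-cni-682353 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the step that failed: check whether runc's state directory exists, then repeat the listing
	out/minikube-linux-amd64 ssh -p newest-cni-682353 -- ls -ld /run/runc
	out/minikube-linux-amd64 ssh -p newest-cni-682353 -- sudo runc list -f json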
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-682353
helpers_test.go:243: (dbg) docker inspect newest-cni-682353:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188",
	        "Created": "2025-12-02T16:17:54.498495762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 633179,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:18:27.930331363Z",
	            "FinishedAt": "2025-12-02T16:18:26.427980635Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188/hostname",
	        "HostsPath": "/var/lib/docker/containers/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188/hosts",
	        "LogPath": "/var/lib/docker/containers/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188-json.log",
	        "Name": "/newest-cni-682353",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-682353:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-682353",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188",
	                "LowerDir": "/var/lib/docker/overlay2/1f044ac655221173ad21f42de851de89b4294bf00ed7588a758e1c216c20f865-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f044ac655221173ad21f42de851de89b4294bf00ed7588a758e1c216c20f865/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f044ac655221173ad21f42de851de89b4294bf00ed7588a758e1c216c20f865/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f044ac655221173ad21f42de851de89b4294bf00ed7588a758e1c216c20f865/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-682353",
	                "Source": "/var/lib/docker/volumes/newest-cni-682353/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-682353",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-682353",
	                "name.minikube.sigs.k8s.io": "newest-cni-682353",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5c996ddee384c86b803e9da8fb4c2828816a3945d28d26f546a9884da940103e",
	            "SandboxKey": "/var/run/docker/netns/5c996ddee384",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33265"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33266"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33269"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33267"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33268"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-682353": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ac149f6cf88728cc866ee4dd469920e42598af4e720f482a4a4ddfe77f5ff8f",
	                    "EndpointID": "16af9bb749947478b502ce2264f378996e2c4a55af0b7e078bb8219dd485c9b5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "22:d2:b1:88:0f:d3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-682353",
	                        "a775ae5be075"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-682353 -n newest-cni-682353
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-682353 -n newest-cni-682353: exit status 2 (337.979377ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-682353 logs -n 25
E1202 16:18:41.653396  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kindnet-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-806420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ old-k8s-version-380588 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p old-k8s-version-380588 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ no-preload-534842 image list --format=json                                                                                                                                                                                                           │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p no-preload-534842 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ image   │ embed-certs-046271 image list --format=json                                                                                                                                                                                                          │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p embed-certs-046271 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ delete  │ -p embed-certs-046271                                                                                                                                                                                                                                │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ addons  │ enable metrics-server -p newest-cni-682353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ delete  │ -p embed-certs-046271                                                                                                                                                                                                                                │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ stop    │ -p newest-cni-682353 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ default-k8s-diff-port-806420 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p default-k8s-diff-port-806420 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-682353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ start   │ -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ delete  │ -p default-k8s-diff-port-806420                                                                                                                                                                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ delete  │ -p default-k8s-diff-port-806420                                                                                                                                                                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ newest-cni-682353 image list --format=json                                                                                                                                                                                                           │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p newest-cni-682353 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:18:27
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:18:27.096789  632702 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:18:27.096907  632702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:27.096916  632702 out.go:374] Setting ErrFile to fd 2...
	I1202 16:18:27.096920  632702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:27.097170  632702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:18:27.097655  632702 out.go:368] Setting JSON to false
	I1202 16:18:27.098723  632702 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10848,"bootTime":1764681459,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:18:27.098783  632702 start.go:143] virtualization: kvm guest
	I1202 16:18:27.100925  632702 out.go:179] * [newest-cni-682353] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:18:27.102297  632702 notify.go:221] Checking for updates...
	I1202 16:18:27.102310  632702 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:18:27.103490  632702 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:18:27.104790  632702 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:18:27.105915  632702 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:18:27.106974  632702 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:18:27.108111  632702 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:18:27.109658  632702 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:18:27.110228  632702 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:18:27.133604  632702 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:18:27.133775  632702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:18:27.200970  632702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 16:18:27.188090505 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:18:27.201072  632702 docker.go:319] overlay module found
	I1202 16:18:27.203028  632702 out.go:179] * Using the docker driver based on existing profile
	I1202 16:18:27.204448  632702 start.go:309] selected driver: docker
	I1202 16:18:27.204470  632702 start.go:927] validating driver "docker" against &{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:27.204550  632702 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:18:27.205059  632702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:18:27.272023  632702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 16:18:27.259665242 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:18:27.272666  632702 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 16:18:27.272716  632702 cni.go:84] Creating CNI manager for ""
	I1202 16:18:27.272787  632702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:18:27.272867  632702 start.go:353] cluster config:
	{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:27.277019  632702 out.go:179] * Starting "newest-cni-682353" primary control-plane node in "newest-cni-682353" cluster
	I1202 16:18:27.278537  632702 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:18:27.279898  632702 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:18:27.280976  632702 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:18:27.281047  632702 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:18:27.305155  632702 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:18:27.305175  632702 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 16:18:27.866989  632702 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1202 16:18:27.881613  632702 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
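	The two 404s above only mean that no preload tarball has been published for v1.35.0-beta.0, so minikube falls back to the per-image cache saved in the lines below. A quick way to confirm from any host (URL copied from the warning above; a sketch, not part of the test run):
	  # expect a 404 status line for an unpublished preload
	  curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 | head -n 1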
	I1202 16:18:27.881752  632702 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json ...
	I1202 16:18:27.881859  632702 cache.go:107] acquiring lock: {Name:mk821cef64e8468a2739d03d2e1019ac980bf2cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881904  632702 cache.go:107] acquiring lock: {Name:mkce5d795e0ca01a9ee3d674d001cd6e04bbbfba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881880  632702 cache.go:107] acquiring lock: {Name:mk3f4d40fdf359ce0573637a386f14c0a310cdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881933  632702 cache.go:107] acquiring lock: {Name:mkec45cdfdbdafc0ef1296b9d77662a50add1cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881857  632702 cache.go:107] acquiring lock: {Name:mk6b8eeb5270fa67a5a87f892f37de1ae4805f75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881939  632702 cache.go:107] acquiring lock: {Name:mka2aa325920dfb2720f9036278856e8dac95446 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881982  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 16:18:27.881987  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 16:18:27.882001  632702 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 138.196µs
	I1202 16:18:27.882001  632702 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 97.896µs
	I1202 16:18:27.882003  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 16:18:27.882001  632702 cache.go:107] acquiring lock: {Name:mk91bc91bcc535b3edd8200bf0c06e4d97781487 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.882022  632702 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 16:18:27.882017  632702 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 180.698µs
	I1202 16:18:27.881986  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 16:18:27.882024  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 16:18:27.882019  632702 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 16:18:27.882034  632702 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 198.789µs
	I1202 16:18:27.882043  632702 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 16:18:27.881959  632702 cache.go:107] acquiring lock: {Name:mk17b77bf762047097cbe060b18dc85ae78a9727 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.882057  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 16:18:27.882072  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 16:18:27.882079  632702 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 139.818µs
	I1202 16:18:27.882085  632702 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 16:18:27.882080  632702 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 146.002µs
	I1202 16:18:27.882040  632702 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 116.393µs
	I1202 16:18:27.882094  632702 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 16:18:27.882026  632702 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:18:27.882062  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 16:18:27.882095  632702 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 16:18:27.882108  632702 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 112.685µs
	I1202 16:18:27.882116  632702 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 16:18:27.882030  632702 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 16:18:27.882127  632702 cache.go:87] Successfully saved all images to host disk.
	I1202 16:18:27.882132  632702 start.go:360] acquireMachinesLock for newest-cni-682353: {Name:mkfed8f02380af59f92aa0b6f8ae02a29dbe0c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.882157  632702 start.go:364] duration metric: took 15.081µs to acquireMachinesLock for "newest-cni-682353"
	I1202 16:18:27.882174  632702 start.go:96] Skipping create...Using existing machine configuration
	I1202 16:18:27.882187  632702 fix.go:54] fixHost starting: 
	I1202 16:18:27.882409  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:27.899829  632702 fix.go:112] recreateIfNeeded on newest-cni-682353: state=Stopped err=<nil>
	W1202 16:18:27.899862  632702 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 16:18:27.902536  632702 out.go:252] * Restarting existing docker container for "newest-cni-682353" ...
	I1202 16:18:27.902610  632702 cli_runner.go:164] Run: docker start newest-cni-682353
	I1202 16:18:28.167307  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:28.188833  632702 kic.go:430] container "newest-cni-682353" state is running.
	I1202 16:18:28.189309  632702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:18:28.210140  632702 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json ...
	I1202 16:18:28.210458  632702 machine.go:94] provisionDockerMachine start ...
	I1202 16:18:28.210557  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:28.230546  632702 main.go:143] libmachine: Using SSH client type: native
	I1202 16:18:28.230836  632702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33265 <nil> <nil>}
	I1202 16:18:28.230849  632702 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:18:28.231650  632702 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33004->127.0.0.1:33265: read: connection reset by peer
	I1202 16:18:31.375476  632702 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-682353
	
	I1202 16:18:31.375503  632702 ubuntu.go:182] provisioning hostname "newest-cni-682353"
	I1202 16:18:31.375561  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:31.393858  632702 main.go:143] libmachine: Using SSH client type: native
	I1202 16:18:31.394130  632702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33265 <nil> <nil>}
	I1202 16:18:31.394148  632702 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-682353 && echo "newest-cni-682353" | sudo tee /etc/hostname
	I1202 16:18:31.546381  632702 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-682353
	
	I1202 16:18:31.546520  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:31.565035  632702 main.go:143] libmachine: Using SSH client type: native
	I1202 16:18:31.565246  632702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33265 <nil> <nil>}
	I1202 16:18:31.565262  632702 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-682353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-682353/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-682353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:18:31.704273  632702 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:18:31.704309  632702 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:18:31.704334  632702 ubuntu.go:190] setting up certificates
	I1202 16:18:31.704350  632702 provision.go:84] configureAuth start
	I1202 16:18:31.704415  632702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:18:31.722482  632702 provision.go:143] copyHostCerts
	I1202 16:18:31.722557  632702 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:18:31.722574  632702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:18:31.722658  632702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:18:31.722781  632702 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:18:31.722793  632702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:18:31.722840  632702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:18:31.722926  632702 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:18:31.722935  632702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:18:31.722981  632702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:18:31.723057  632702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.newest-cni-682353 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-682353]
	I1202 16:18:31.891698  632702 provision.go:177] copyRemoteCerts
	I1202 16:18:31.891783  632702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:18:31.891835  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:31.909877  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:32.009102  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 16:18:32.026501  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:18:32.044121  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 16:18:32.061615  632702 provision.go:87] duration metric: took 357.250765ms to configureAuth
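	If the SAN list generated above ever needs checking, the server cert written during copyRemoteCerts can be inspected on the Jenkins host (path taken from the log above; a sketch only):
	  openssl x509 -in /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'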
	I1202 16:18:32.061645  632702 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:18:32.061831  632702 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:18:32.061950  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:32.080461  632702 main.go:143] libmachine: Using SSH client type: native
	I1202 16:18:32.080679  632702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33265 <nil> <nil>}
	I1202 16:18:32.080695  632702 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:18:32.375082  632702 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:18:32.375109  632702 machine.go:97] duration metric: took 4.16462682s to provisionDockerMachine
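	The sysconfig drop-in written a few lines up can be double-checked from the host; a minimal sketch, assuming the newest-cni-682353 container is still running:
	  docker exec newest-cni-682353 cat /etc/sysconfig/crio.minikube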
	I1202 16:18:32.375121  632702 start.go:293] postStartSetup for "newest-cni-682353" (driver="docker")
	I1202 16:18:32.375134  632702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:18:32.375197  632702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:18:32.375236  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:32.393848  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:32.495325  632702 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:18:32.499380  632702 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:18:32.499409  632702 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:18:32.499443  632702 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:18:32.499503  632702 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:18:32.499604  632702 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:18:32.499737  632702 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:18:32.508124  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:18:32.527058  632702 start.go:296] duration metric: took 151.91868ms for postStartSetup
	I1202 16:18:32.527147  632702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:18:32.527200  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:32.547309  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:32.646662  632702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:18:32.651518  632702 fix.go:56] duration metric: took 4.769325714s for fixHost
	I1202 16:18:32.651541  632702 start.go:83] releasing machines lock for "newest-cni-682353", held for 4.769372545s
	I1202 16:18:32.651604  632702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:18:32.670263  632702 ssh_runner.go:195] Run: cat /version.json
	I1202 16:18:32.670327  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:32.670341  632702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:18:32.670448  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:32.690193  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:32.690321  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:32.843400  632702 ssh_runner.go:195] Run: systemctl --version
	I1202 16:18:32.849952  632702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:18:32.889736  632702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:18:32.895185  632702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:18:32.895255  632702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:18:32.906466  632702 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 16:18:32.906507  632702 start.go:496] detecting cgroup driver to use...
	I1202 16:18:32.906547  632702 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:18:32.906704  632702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:18:32.924596  632702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:18:32.940804  632702 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:18:32.940873  632702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:18:32.960372  632702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:18:32.977894  632702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:18:33.070681  632702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:18:33.155348  632702 docker.go:234] disabling docker service ...
	I1202 16:18:33.155430  632702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:18:33.170699  632702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:18:33.183267  632702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:18:33.264888  632702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:18:33.346350  632702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:18:33.358835  632702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:18:33.373408  632702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:18:33.373484  632702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.382672  632702 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:18:33.382733  632702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.392523  632702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.401478  632702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.410378  632702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:18:33.418878  632702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.427684  632702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.436322  632702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.445202  632702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:18:33.452752  632702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:18:33.460417  632702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:18:33.539438  632702 ssh_runner.go:195] Run: sudo systemctl restart crio
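	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with pause_image "registry.k8s.io/pause:3.10.1", cgroup_manager "systemd", conmon_cgroup "pod" and the unprivileged-port sysctl; a quick check inside the node (a sketch):
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf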
	I1202 16:18:33.695766  632702 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:18:33.695848  632702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:18:33.701323  632702 start.go:564] Will wait 60s for crictl version
	I1202 16:18:33.701390  632702 ssh_runner.go:195] Run: which crictl
	I1202 16:18:33.705206  632702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:18:33.735601  632702 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 16:18:33.735697  632702 ssh_runner.go:195] Run: crio --version
	I1202 16:18:33.770986  632702 ssh_runner.go:195] Run: crio --version
	I1202 16:18:33.807856  632702 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 16:18:33.809040  632702 cli_runner.go:164] Run: docker network inspect newest-cni-682353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
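	The Go template above packs all the network facts into one JSON blob; the subnet alone can be pulled with a much simpler template (a sketch, using the network name from this run):
	  docker network inspect newest-cni-682353 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'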
	I1202 16:18:33.827561  632702 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 16:18:33.831887  632702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:18:33.843369  632702 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1202 16:18:33.844353  632702 kubeadm.go:884] updating cluster {Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:18:33.844491  632702 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:18:33.844532  632702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:18:33.876407  632702 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:18:33.876447  632702 cache_images.go:86] Images are preloaded, skipping loading
	I1202 16:18:33.876456  632702 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1202 16:18:33.876576  632702 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-682353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
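	The kubelet override above is written out a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; once the node is up, the effective unit plus drop-in can be confirmed with (inside the node; a sketch):
	  sudo systemctl cat kubelet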
	I1202 16:18:33.876662  632702 ssh_runner.go:195] Run: crio config
	I1202 16:18:33.925827  632702 cni.go:84] Creating CNI manager for ""
	I1202 16:18:33.925845  632702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:18:33.925860  632702 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 16:18:33.925882  632702 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-682353 NodeName:newest-cni-682353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:18:33.926012  632702 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-682353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 16:18:33.926070  632702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 16:18:33.934985  632702 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 16:18:33.935068  632702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:18:33.943624  632702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1202 16:18:33.957253  632702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 16:18:33.970750  632702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
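	If the rendered kubeadm config ever needs sanity-checking by hand, recent kubeadm releases ship a validator; a sketch, assuming the binary path and file name from this run and that "kubeadm config validate" is available in v1.35.0-beta.0:
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new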
	I1202 16:18:33.984335  632702 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:18:33.988303  632702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:18:33.998545  632702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:18:34.079270  632702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:18:34.100703  632702 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353 for IP: 192.168.103.2
	I1202 16:18:34.100731  632702 certs.go:195] generating shared ca certs ...
	I1202 16:18:34.100753  632702 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:34.100947  632702 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:18:34.100993  632702 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:18:34.101002  632702 certs.go:257] generating profile certs ...
	I1202 16:18:34.101098  632702 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.key
	I1202 16:18:34.101156  632702 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0
	I1202 16:18:34.101190  632702 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key
	I1202 16:18:34.101308  632702 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:18:34.101340  632702 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:18:34.101352  632702 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:18:34.101378  632702 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:18:34.101403  632702 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:18:34.101452  632702 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:18:34.101498  632702 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:18:34.102143  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:18:34.120751  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:18:34.139730  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:18:34.161166  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:18:34.184970  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 16:18:34.204608  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 16:18:34.222591  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:18:34.240258  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 16:18:34.258300  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:18:34.275854  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:18:34.294661  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:18:34.314089  632702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:18:34.327008  632702 ssh_runner.go:195] Run: openssl version
	I1202 16:18:34.333045  632702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:18:34.341352  632702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:34.344973  632702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:34.345025  632702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:34.380726  632702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:18:34.389203  632702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:18:34.398058  632702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:18:34.401858  632702 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:18:34.401913  632702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:18:34.435630  632702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:18:34.444261  632702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:18:34.452730  632702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:18:34.456493  632702 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:18:34.456538  632702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:18:34.492029  632702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
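	The certificate steps above copy each CA into /usr/share/ca-certificates and then link it under /etc/ssl/certs by its OpenSSL subject hash (e.g. minikubeCA.pem -> b5213941.0). A minimal sketch of the same mechanism, assuming the paths from this run:
	
	    # Compute the subject hash and link the CA where OpenSSL-based clients look it up.
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # -f keeps the step idempotent on restart
	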
	I1202 16:18:34.500458  632702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:18:34.504312  632702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:18:34.538255  632702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:18:34.572311  632702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:18:34.607698  632702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:18:34.650354  632702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:18:34.697324  632702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 16:18:34.748663  632702 kubeadm.go:401] StartCluster: {Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:34.748785  632702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:18:34.748868  632702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:18:34.786306  632702 cri.go:89] found id: "cb312f13091c58bd72d656a9744dbd05f9804fdf67c95588b994fb7a3c8a08b7"
	I1202 16:18:34.786329  632702 cri.go:89] found id: "c8f017ed73870ab02759b08f235ff372e1d39e18f2cba24a7dc958208be38f45"
	I1202 16:18:34.786336  632702 cri.go:89] found id: "637f7511012f268dc11abb2bdb14e8541a010a8282803345662aee9434c58f91"
	I1202 16:18:34.786340  632702 cri.go:89] found id: "0d367e0e69f0e7e85292b0ba7c75a0d708dac3e3ee3b2f01dc0c4ea1736b98fc"
	I1202 16:18:34.786345  632702 cri.go:89] found id: ""
	I1202 16:18:34.786393  632702 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 16:18:34.798709  632702 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:34Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:18:34.798788  632702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:18:34.806965  632702 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:18:34.806987  632702 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:18:34.807039  632702 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:18:34.814989  632702 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:18:34.815494  632702 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-682353" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:18:34.815654  632702 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-264555/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-682353" cluster setting kubeconfig missing "newest-cni-682353" context setting]
	I1202 16:18:34.816046  632702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:34.817669  632702 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:18:34.825062  632702 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1202 16:18:34.825089  632702 kubeadm.go:602] duration metric: took 18.088181ms to restartPrimaryControlPlane
	I1202 16:18:34.825100  632702 kubeadm.go:403] duration metric: took 76.448812ms to StartCluster
	I1202 16:18:34.825116  632702 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:34.825187  632702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:18:34.825704  632702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:34.825935  632702 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:18:34.826057  632702 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 16:18:34.826146  632702 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-682353"
	I1202 16:18:34.826156  632702 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:18:34.826169  632702 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-682353"
	W1202 16:18:34.826178  632702 addons.go:248] addon storage-provisioner should already be in state true
	I1202 16:18:34.826183  632702 addons.go:70] Setting dashboard=true in profile "newest-cni-682353"
	I1202 16:18:34.826206  632702 host.go:66] Checking if "newest-cni-682353" exists ...
	I1202 16:18:34.826212  632702 addons.go:239] Setting addon dashboard=true in "newest-cni-682353"
	W1202 16:18:34.826224  632702 addons.go:248] addon dashboard should already be in state true
	I1202 16:18:34.826260  632702 host.go:66] Checking if "newest-cni-682353" exists ...
	I1202 16:18:34.826183  632702 addons.go:70] Setting default-storageclass=true in profile "newest-cni-682353"
	I1202 16:18:34.826304  632702 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-682353"
	I1202 16:18:34.826601  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:34.826669  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:34.826748  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:34.828960  632702 out.go:179] * Verifying Kubernetes components...
	I1202 16:18:34.830217  632702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:18:34.851713  632702 addons.go:239] Setting addon default-storageclass=true in "newest-cni-682353"
	W1202 16:18:34.851738  632702 addons.go:248] addon default-storageclass should already be in state true
	I1202 16:18:34.851765  632702 host.go:66] Checking if "newest-cni-682353" exists ...
	I1202 16:18:34.852195  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:34.852976  632702 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 16:18:34.852977  632702 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:34.854295  632702 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:18:34.854319  632702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 16:18:34.854363  632702 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 16:18:34.854372  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:34.855457  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:18:34.855481  632702 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:18:34.855540  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:34.887571  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:34.889546  632702 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:18:34.889575  632702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:18:34.889649  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:34.894669  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:34.911859  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:34.989413  632702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:18:35.007363  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:18:35.007410  632702 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:18:35.007592  632702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:18:35.007876  632702 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:18:35.007938  632702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:18:35.022994  632702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:18:35.023019  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:18:35.023040  632702 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:18:35.037128  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:18:35.037156  632702 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:18:35.051762  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:18:35.051783  632702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:18:35.065496  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:18:35.065528  632702 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:18:35.080881  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:18:35.080914  632702 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:18:35.094143  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:18:35.094167  632702 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:18:35.106720  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:18:35.106746  632702 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:18:35.119818  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:18:35.119844  632702 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:18:35.132843  632702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:18:37.146337  632702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.138701206s)
	I1202 16:18:37.146386  632702 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.138423904s)
	I1202 16:18:37.146438  632702 api_server.go:72] duration metric: took 2.320441896s to wait for apiserver process to appear ...
	I1202 16:18:37.146447  632702 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:18:37.146450  632702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.123427863s)
	I1202 16:18:37.146470  632702 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 16:18:37.146639  632702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.013762349s)
	I1202 16:18:37.148174  632702 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-682353 addons enable metrics-server
	
	I1202 16:18:37.150970  632702 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:18:37.150995  632702 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 16:18:37.159969  632702 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 16:18:37.161127  632702 addons.go:530] duration metric: took 2.335078473s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 16:18:37.646850  632702 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 16:18:37.651399  632702 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:18:37.651444  632702 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 16:18:38.147130  632702 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 16:18:38.151058  632702 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1202 16:18:38.152035  632702 api_server.go:141] control plane version: v1.35.0-beta.0
	I1202 16:18:38.152059  632702 api_server.go:131] duration metric: took 1.005605082s to wait for apiserver health ...
	I1202 16:18:38.152068  632702 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:18:38.155711  632702 system_pods.go:59] 8 kube-system pods found
	I1202 16:18:38.155748  632702 system_pods.go:61] "coredns-7d764666f9-jb9wz" [889f4af6-e976-4ec7-ae6e-ed5ec813fe4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 16:18:38.155758  632702 system_pods.go:61] "etcd-newest-cni-682353" [5ab9fd7e-9c55-45a2-ac07-46d797be98d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:18:38.155767  632702 system_pods.go:61] "kindnet-cxfrf" [164fac47-6c74-434b-b780-1ba1c2a40495] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:18:38.155774  632702 system_pods.go:61] "kube-apiserver-newest-cni-682353" [df312caa-500b-4c0b-bda0-f8acafcff8b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:18:38.155782  632702 system_pods.go:61] "kube-controller-manager-newest-cni-682353" [17765d5c-8f15-40da-886f-c807519c7e05] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:18:38.155789  632702 system_pods.go:61] "kube-proxy-srq78" [6d9b68b3-fb87-47f4-887a-3b1851999e6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:18:38.155794  632702 system_pods.go:61] "kube-scheduler-newest-cni-682353" [6b53974a-1f7e-4d8a-bae6-24aa797c54d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:18:38.155800  632702 system_pods.go:61] "storage-provisioner" [c5d388c9-2f39-4c65-8e57-7846b28c1db8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 16:18:38.155806  632702 system_pods.go:74] duration metric: took 3.732484ms to wait for pod list to return data ...
	I1202 16:18:38.155815  632702 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:18:38.157998  632702 default_sa.go:45] found service account: "default"
	I1202 16:18:38.158016  632702 default_sa.go:55] duration metric: took 2.195211ms for default service account to be created ...
	I1202 16:18:38.158026  632702 kubeadm.go:587] duration metric: took 3.332049235s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 16:18:38.158040  632702 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:18:38.160245  632702 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:18:38.160263  632702 node_conditions.go:123] node cpu capacity is 8
	I1202 16:18:38.160276  632702 node_conditions.go:105] duration metric: took 2.232934ms to run NodePressure ...
	I1202 16:18:38.160287  632702 start.go:242] waiting for startup goroutines ...
	I1202 16:18:38.160293  632702 start.go:247] waiting for cluster config update ...
	I1202 16:18:38.160306  632702 start.go:256] writing updated cluster config ...
	I1202 16:18:38.160581  632702 ssh_runner.go:195] Run: rm -f paused
	I1202 16:18:38.210325  632702 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 16:18:38.212415  632702 out.go:179] * Done! kubectl is now configured to use "newest-cni-682353" cluster and "default" namespace by default
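	The start log above polls https://192.168.103.2:8443/healthz until it returns 200, tolerating the 500 responses while the rbac and priority-class post-start hooks finish. A minimal sketch of running the same probe by hand, assuming the default RBAC that exposes /healthz, /readyz and /version to anonymous clients (system:public-info-viewer) and the kubeconfig context written by this run:
	
	    # Per-check output, matching the 500 bodies shown above:
	    curl -ks 'https://192.168.103.2:8443/healthz?verbose'
	    # Or go through the kubeconfig minikube just updated:
	    kubectl --context newest-cni-682353 get --raw '/healthz?verbose'
	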
	
	
	==> CRI-O <==
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.4761285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.478683679Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=32b9b38d-7a10-4b82-b216-31cc13968154 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.479283522Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=227ebc54-cce0-45d2-8ca2-cd0035b30565 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.480113277Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.480507216Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.480732083Z" level=info msg="Ran pod sandbox 385fd86e3a4791076f858c9038c5eabcf60eece27645dadf02b40e5e1ea2c2b4 with infra container: kube-system/kindnet-cxfrf/POD" id=32b9b38d-7a10-4b82-b216-31cc13968154 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.481189568Z" level=info msg="Ran pod sandbox 7ba89cb394156b553aa049542bef7bc414c4e6b77a966253691edcad3b0f1a56 with infra container: kube-system/kube-proxy-srq78/POD" id=227ebc54-cce0-45d2-8ca2-cd0035b30565 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.481856791Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a33dd7a2-f06d-49a4-a732-8f699623c167 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.482167702Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=15ebb4be-f87c-4cf5-901a-c52514364122 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.482781659Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ae835134-3799-44d4-86d0-bb3d433f4f03 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.483013363Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=6d55e3fd-a930-49de-9bd2-c9cf366a464b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.483879732Z" level=info msg="Creating container: kube-system/kube-proxy-srq78/kube-proxy" id=5832e488-4dd4-459f-8e07-adac26986fa8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.483944604Z" level=info msg="Creating container: kube-system/kindnet-cxfrf/kindnet-cni" id=6892e807-9aa7-4eba-892b-7e0c10bdb3f3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.483977092Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.484015141Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.487792258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.488281267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.488282249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.488844313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.516642673Z" level=info msg="Created container 91fc3c64a25e3fda17bac57736962354faa703cde5738eaae25bd35fa4c465c3: kube-system/kindnet-cxfrf/kindnet-cni" id=6892e807-9aa7-4eba-892b-7e0c10bdb3f3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.517231254Z" level=info msg="Starting container: 91fc3c64a25e3fda17bac57736962354faa703cde5738eaae25bd35fa4c465c3" id=f8e94209-a2e4-4c2d-aa68-b7747892eaf5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.518916695Z" level=info msg="Started container" PID=1032 containerID=91fc3c64a25e3fda17bac57736962354faa703cde5738eaae25bd35fa4c465c3 description=kube-system/kindnet-cxfrf/kindnet-cni id=f8e94209-a2e4-4c2d-aa68-b7747892eaf5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=385fd86e3a4791076f858c9038c5eabcf60eece27645dadf02b40e5e1ea2c2b4
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.519558707Z" level=info msg="Created container 4e5a1f21c78595fd7dc11cf79c1d7485a73ccb9864314205ffa58d85454752b5: kube-system/kube-proxy-srq78/kube-proxy" id=5832e488-4dd4-459f-8e07-adac26986fa8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.520153669Z" level=info msg="Starting container: 4e5a1f21c78595fd7dc11cf79c1d7485a73ccb9864314205ffa58d85454752b5" id=65c9d276-a9a5-406d-b8a9-8511cd360de1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.522959845Z" level=info msg="Started container" PID=1033 containerID=4e5a1f21c78595fd7dc11cf79c1d7485a73ccb9864314205ffa58d85454752b5 description=kube-system/kube-proxy-srq78/kube-proxy id=65c9d276-a9a5-406d-b8a9-8511cd360de1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ba89cb394156b553aa049542bef7bc414c4e6b77a966253691edcad3b0f1a56
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	91fc3c64a25e3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   385fd86e3a479       kindnet-cxfrf                               kube-system
	4e5a1f21c7859       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   4 seconds ago       Running             kube-proxy                1                   7ba89cb394156       kube-proxy-srq78                            kube-system
	cb312f13091c5       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   6 seconds ago       Running             etcd                      1                   4c5f6eabab082       etcd-newest-cni-682353                      kube-system
	c8f017ed73870       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   6 seconds ago       Running             kube-apiserver            1                   b41520b6aeb24       kube-apiserver-newest-cni-682353            kube-system
	637f7511012f2       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   6 seconds ago       Running             kube-controller-manager   1                   7548eff638644       kube-controller-manager-newest-cni-682353   kube-system
	0d367e0e69f0e       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   6 seconds ago       Running             kube-scheduler            1                   8d9e341e48798       kube-scheduler-newest-cni-682353            kube-system
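	The container-status table above is what crictl reports inside the node; a minimal sketch for reproducing it against this profile (profile name taken from this run):
	
	    # All containers, as in the table above:
	    minikube -p newest-cni-682353 ssh -- sudo crictl ps -a
	    # Only kube-system pod containers, the same query the start log issues:
	    minikube -p newest-cni-682353 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	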
	
	
	==> describe nodes <==
	Name:               newest-cni-682353
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-682353
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=newest-cni-682353
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_18_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:18:13 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-682353
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:18:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:18:36 +0000   Tue, 02 Dec 2025 16:18:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:18:36 +0000   Tue, 02 Dec 2025 16:18:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:18:36 +0000   Tue, 02 Dec 2025 16:18:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 02 Dec 2025 16:18:36 +0000   Tue, 02 Dec 2025 16:18:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-682353
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                33d4fe74-dbd2-4001-8121-c4f8c133d3ca
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-682353                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-cxfrf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-682353             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-newest-cni-682353    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-srq78                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-682353             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  22s   node-controller  Node newest-cni-682353 event: Registered Node newest-cni-682353 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-682353 event: Registered Node newest-cni-682353 in Controller
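	The node description above was captured while newest-cni-682353 still carried the node.kubernetes.io/not-ready:NoSchedule taint (no CNI configuration file in /etc/cni/net.d yet), which is why coredns and storage-provisioner appear as Unschedulable earlier in the log. A sketch for pulling the same view and watching the taint clear, assuming the kubeconfig context from this run:
	
	    kubectl --context newest-cni-682353 describe node newest-cni-682353
	    # Taint should disappear once kindnet writes the CNI config and the kubelet reports Ready:
	    kubectl --context newest-cni-682353 get node newest-cni-682353 -o jsonpath='{.spec.taints}'
	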
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [cb312f13091c58bd72d656a9744dbd05f9804fdf67c95588b994fb7a3c8a08b7] <==
	{"level":"warn","ts":"2025-12-02T16:18:36.028135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.034435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.047569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.054454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.060618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.067436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.073876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.080023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.086349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.093038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.103573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.110038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.116389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.124219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.131013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.137322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.145170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.158093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.164647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.177602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.181011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.188696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.195263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.201858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.254332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43106","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 16:18:41 up  3:01,  0 user,  load average: 3.25, 3.91, 2.71
	Linux newest-cni-682353 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [91fc3c64a25e3fda17bac57736962354faa703cde5738eaae25bd35fa4c465c3] <==
	I1202 16:18:37.634100       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 16:18:37.728720       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1202 16:18:37.728871       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:18:37.728895       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:18:37.728919       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:18:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:18:37.928807       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:18:37.928988       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:18:37.929020       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:18:37.929253       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 16:18:38.329160       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:18:38.329192       1 metrics.go:72] Registering metrics
	I1202 16:18:38.329255       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [c8f017ed73870ab02759b08f235ff372e1d39e18f2cba24a7dc958208be38f45] <==
	I1202 16:18:36.708024       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 16:18:36.708031       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:36.708031       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 16:18:36.708080       1 cache.go:39] Caches are synced for autoregister controller
	I1202 16:18:36.708123       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 16:18:36.708030       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 16:18:36.708211       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 16:18:36.708233       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 16:18:36.708044       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 16:18:36.708531       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:36.713685       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:36.713709       1 policy_source.go:248] refreshing policies
	I1202 16:18:36.714194       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 16:18:36.750294       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:18:36.956487       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:18:36.984966       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:18:37.008265       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:18:37.016414       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:18:37.023474       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:18:37.060822       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.240.134"}
	I1202 16:18:37.072032       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.170.34"}
	I1202 16:18:37.610944       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 16:18:40.320301       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 16:18:40.371491       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:18:40.420373       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [637f7511012f268dc11abb2bdb14e8541a010a8282803345662aee9434c58f91] <==
	I1202 16:18:39.882400       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.882464       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.882551       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.882604       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.882633       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.882653       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.883272       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.883862       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.887508       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.887580       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.887618       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.887744       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888619       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888641       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888675       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888695       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888717       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888729       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888767       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888843       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888859       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 16:18:39.888865       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 16:18:39.888810       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888736       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.979875       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [4e5a1f21c78595fd7dc11cf79c1d7485a73ccb9864314205ffa58d85454752b5] <==
	I1202 16:18:37.555441       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:18:37.619588       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:18:37.719733       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:37.719796       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1202 16:18:37.719912       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:18:37.741976       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:18:37.742048       1 server_linux.go:136] "Using iptables Proxier"
	I1202 16:18:37.748089       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:18:37.748594       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 16:18:37.748639       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:18:37.750278       1 config.go:309] "Starting node config controller"
	I1202 16:18:37.750354       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:18:37.750367       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:18:37.750394       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:18:37.750399       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:18:37.750438       1 config.go:200] "Starting service config controller"
	I1202 16:18:37.750446       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:18:37.750467       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:18:37.750472       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:18:37.850579       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 16:18:37.850592       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 16:18:37.850613       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0d367e0e69f0e7e85292b0ba7c75a0d708dac3e3ee3b2f01dc0c4ea1736b98fc] <==
	I1202 16:18:35.067773       1 serving.go:386] Generated self-signed cert in-memory
	W1202 16:18:36.624279       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 16:18:36.624310       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 16:18:36.624321       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 16:18:36.624332       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 16:18:36.675284       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 16:18:36.675403       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:18:36.679679       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 16:18:36.679894       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:18:36.680763       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:18:36.679920       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 16:18:36.782620       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: I1202 16:18:36.743495     660 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: E1202 16:18:36.783447     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-682353\" already exists" pod="kube-system/etcd-newest-cni-682353"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: I1202 16:18:36.783490     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-682353"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: E1202 16:18:36.791283     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-682353\" already exists" pod="kube-system/kube-apiserver-newest-cni-682353"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: I1202 16:18:36.791329     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-682353"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: E1202 16:18:36.798589     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-682353\" already exists" pod="kube-system/kube-controller-manager-newest-cni-682353"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: I1202 16:18:36.798786     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-682353"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: E1202 16:18:36.805350     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-682353\" already exists" pod="kube-system/kube-scheduler-newest-cni-682353"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.165434     660 apiserver.go:52] "Watching apiserver"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: E1202 16:18:37.171770     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-682353" containerName="kube-controller-manager"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: E1202 16:18:37.212132     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-682353" containerName="kube-scheduler"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.212173     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-682353"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: E1202 16:18:37.212352     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-682353" containerName="kube-apiserver"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: E1202 16:18:37.219326     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-682353\" already exists" pod="kube-system/etcd-newest-cni-682353"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: E1202 16:18:37.219410     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-682353" containerName="etcd"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.272830     660 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.273704     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/164fac47-6c74-434b-b780-1ba1c2a40495-cni-cfg\") pod \"kindnet-cxfrf\" (UID: \"164fac47-6c74-434b-b780-1ba1c2a40495\") " pod="kube-system/kindnet-cxfrf"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.273750     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/164fac47-6c74-434b-b780-1ba1c2a40495-xtables-lock\") pod \"kindnet-cxfrf\" (UID: \"164fac47-6c74-434b-b780-1ba1c2a40495\") " pod="kube-system/kindnet-cxfrf"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.273798     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d9b68b3-fb87-47f4-887a-3b1851999e6c-xtables-lock\") pod \"kube-proxy-srq78\" (UID: \"6d9b68b3-fb87-47f4-887a-3b1851999e6c\") " pod="kube-system/kube-proxy-srq78"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.273836     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/164fac47-6c74-434b-b780-1ba1c2a40495-lib-modules\") pod \"kindnet-cxfrf\" (UID: \"164fac47-6c74-434b-b780-1ba1c2a40495\") " pod="kube-system/kindnet-cxfrf"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.273868     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d9b68b3-fb87-47f4-887a-3b1851999e6c-lib-modules\") pod \"kube-proxy-srq78\" (UID: \"6d9b68b3-fb87-47f4-887a-3b1851999e6c\") " pod="kube-system/kube-proxy-srq78"
	Dec 02 16:18:38 newest-cni-682353 kubelet[660]: E1202 16:18:38.217510     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-682353" containerName="etcd"
	Dec 02 16:18:39 newest-cni-682353 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 16:18:39 newest-cni-682353 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 16:18:39 newest-cni-682353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-682353 -n newest-cni-682353
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-682353 -n newest-cni-682353: exit status 2 (327.50653ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-682353 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-jb9wz storage-provisioner dashboard-metrics-scraper-867fb5f87b-9m4mf kubernetes-dashboard-b84665fb8-vh2pr
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-682353 describe pod coredns-7d764666f9-jb9wz storage-provisioner dashboard-metrics-scraper-867fb5f87b-9m4mf kubernetes-dashboard-b84665fb8-vh2pr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-682353 describe pod coredns-7d764666f9-jb9wz storage-provisioner dashboard-metrics-scraper-867fb5f87b-9m4mf kubernetes-dashboard-b84665fb8-vh2pr: exit status 1 (61.985384ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jb9wz" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-9m4mf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-vh2pr" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-682353 describe pod coredns-7d764666f9-jb9wz storage-provisioner dashboard-metrics-scraper-867fb5f87b-9m4mf kubernetes-dashboard-b84665fb8-vh2pr: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-682353
helpers_test.go:243: (dbg) docker inspect newest-cni-682353:

-- stdout --
	[
	    {
	        "Id": "a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188",
	        "Created": "2025-12-02T16:17:54.498495762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 633179,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T16:18:27.930331363Z",
	            "FinishedAt": "2025-12-02T16:18:26.427980635Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188/hostname",
	        "HostsPath": "/var/lib/docker/containers/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188/hosts",
	        "LogPath": "/var/lib/docker/containers/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188/a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188-json.log",
	        "Name": "/newest-cni-682353",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-682353:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-682353",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a775ae5be075602a44fc7a11cf75bd1c7c4a445ea81c87d52e44f9d45bffd188",
	                "LowerDir": "/var/lib/docker/overlay2/1f044ac655221173ad21f42de851de89b4294bf00ed7588a758e1c216c20f865-init/diff:/var/lib/docker/overlay2/ab98578cee54140c21ba2edb7c02601b9799fbaa027f05ce4daaae66d198c082/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f044ac655221173ad21f42de851de89b4294bf00ed7588a758e1c216c20f865/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f044ac655221173ad21f42de851de89b4294bf00ed7588a758e1c216c20f865/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f044ac655221173ad21f42de851de89b4294bf00ed7588a758e1c216c20f865/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-682353",
	                "Source": "/var/lib/docker/volumes/newest-cni-682353/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-682353",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-682353",
	                "name.minikube.sigs.k8s.io": "newest-cni-682353",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5c996ddee384c86b803e9da8fb4c2828816a3945d28d26f546a9884da940103e",
	            "SandboxKey": "/var/run/docker/netns/5c996ddee384",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33265"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33266"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33269"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33267"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33268"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-682353": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ac149f6cf88728cc866ee4dd469920e42598af4e720f482a4a4ddfe77f5ff8f",
	                    "EndpointID": "16af9bb749947478b502ce2264f378996e2c4a55af0b7e078bb8219dd485c9b5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "22:d2:b1:88:0f:d3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-682353",
	                        "a775ae5be075"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-682353 -n newest-cni-682353
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-682353 -n newest-cni-682353: exit status 2 (339.171541ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-682353 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-806420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ old-k8s-version-380588 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p old-k8s-version-380588 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ image   │ no-preload-534842 image list --format=json                                                                                                                                                                                                           │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ pause   │ -p no-preload-534842 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │                     │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ delete  │ -p old-k8s-version-380588                                                                                                                                                                                                                            │ old-k8s-version-380588       │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ start   │ -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:18 UTC │
	│ delete  │ -p no-preload-534842                                                                                                                                                                                                                                 │ no-preload-534842            │ jenkins │ v1.37.0 │ 02 Dec 25 16:17 UTC │ 02 Dec 25 16:17 UTC │
	│ image   │ embed-certs-046271 image list --format=json                                                                                                                                                                                                          │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p embed-certs-046271 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ delete  │ -p embed-certs-046271                                                                                                                                                                                                                                │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ addons  │ enable metrics-server -p newest-cni-682353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ delete  │ -p embed-certs-046271                                                                                                                                                                                                                                │ embed-certs-046271           │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ stop    │ -p newest-cni-682353 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ default-k8s-diff-port-806420 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p default-k8s-diff-port-806420 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-682353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ start   │ -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ delete  │ -p default-k8s-diff-port-806420                                                                                                                                                                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ delete  │ -p default-k8s-diff-port-806420                                                                                                                                                                                                                      │ default-k8s-diff-port-806420 │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ image   │ newest-cni-682353 image list --format=json                                                                                                                                                                                                           │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │ 02 Dec 25 16:18 UTC │
	│ pause   │ -p newest-cni-682353 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-682353            │ jenkins │ v1.37.0 │ 02 Dec 25 16:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 16:18:27
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 16:18:27.096789  632702 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:18:27.096907  632702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:27.096916  632702 out.go:374] Setting ErrFile to fd 2...
	I1202 16:18:27.096920  632702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:18:27.097170  632702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:18:27.097655  632702 out.go:368] Setting JSON to false
	I1202 16:18:27.098723  632702 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10848,"bootTime":1764681459,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:18:27.098783  632702 start.go:143] virtualization: kvm guest
	I1202 16:18:27.100925  632702 out.go:179] * [newest-cni-682353] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:18:27.102297  632702 notify.go:221] Checking for updates...
	I1202 16:18:27.102310  632702 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:18:27.103490  632702 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:18:27.104790  632702 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:18:27.105915  632702 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:18:27.106974  632702 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:18:27.108111  632702 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:18:27.109658  632702 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:18:27.110228  632702 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:18:27.133604  632702 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:18:27.133775  632702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:18:27.200970  632702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 16:18:27.188090505 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:18:27.201072  632702 docker.go:319] overlay module found
	I1202 16:18:27.203028  632702 out.go:179] * Using the docker driver based on existing profile
	I1202 16:18:27.204448  632702 start.go:309] selected driver: docker
	I1202 16:18:27.204470  632702 start.go:927] validating driver "docker" against &{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:27.204550  632702 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:18:27.205059  632702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:18:27.272023  632702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 16:18:27.259665242 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:18:27.272666  632702 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 16:18:27.272716  632702 cni.go:84] Creating CNI manager for ""
	I1202 16:18:27.272787  632702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:18:27.272867  632702 start.go:353] cluster config:
	{Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:27.277019  632702 out.go:179] * Starting "newest-cni-682353" primary control-plane node in "newest-cni-682353" cluster
	I1202 16:18:27.278537  632702 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 16:18:27.279898  632702 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 16:18:27.280976  632702 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:18:27.281047  632702 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 16:18:27.305155  632702 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 16:18:27.305175  632702 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 16:18:27.866989  632702 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1202 16:18:27.881613  632702 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1202 16:18:27.881752  632702 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json ...
	I1202 16:18:27.881859  632702 cache.go:107] acquiring lock: {Name:mk821cef64e8468a2739d03d2e1019ac980bf2cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881904  632702 cache.go:107] acquiring lock: {Name:mkce5d795e0ca01a9ee3d674d001cd6e04bbbfba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881880  632702 cache.go:107] acquiring lock: {Name:mk3f4d40fdf359ce0573637a386f14c0a310cdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881933  632702 cache.go:107] acquiring lock: {Name:mkec45cdfdbdafc0ef1296b9d77662a50add1cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881857  632702 cache.go:107] acquiring lock: {Name:mk6b8eeb5270fa67a5a87f892f37de1ae4805f75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881939  632702 cache.go:107] acquiring lock: {Name:mka2aa325920dfb2720f9036278856e8dac95446 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.881982  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 16:18:27.881987  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 16:18:27.882001  632702 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 138.196µs
	I1202 16:18:27.882001  632702 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 97.896µs
	I1202 16:18:27.882003  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 16:18:27.882001  632702 cache.go:107] acquiring lock: {Name:mk91bc91bcc535b3edd8200bf0c06e4d97781487 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.882022  632702 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 16:18:27.882017  632702 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 180.698µs
	I1202 16:18:27.881986  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 16:18:27.882024  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 16:18:27.882019  632702 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 16:18:27.882034  632702 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 198.789µs
	I1202 16:18:27.882043  632702 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 16:18:27.881959  632702 cache.go:107] acquiring lock: {Name:mk17b77bf762047097cbe060b18dc85ae78a9727 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.882057  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 16:18:27.882072  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 16:18:27.882079  632702 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 139.818µs
	I1202 16:18:27.882085  632702 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 16:18:27.882080  632702 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 146.002µs
	I1202 16:18:27.882040  632702 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 116.393µs
	I1202 16:18:27.882094  632702 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 16:18:27.882026  632702 cache.go:243] Successfully downloaded all kic artifacts
	I1202 16:18:27.882062  632702 cache.go:115] /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 16:18:27.882095  632702 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 16:18:27.882108  632702 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 112.685µs
	I1202 16:18:27.882116  632702 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 16:18:27.882030  632702 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 16:18:27.882127  632702 cache.go:87] Successfully saved all images to host disk.
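The cache lines above record minikube's image-cache fast path: each image takes its own lock, the cached tarball path is stat'ed, and the save step is skipped when the file already exists. A minimal, self-contained Go sketch of that check-under-per-image-lock pattern (illustrative only, not minikube's cache.go; the cache directory and image name are placeholders):

// Illustrative sketch of a per-image cache check.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

var locks sync.Map // image name -> *sync.Mutex

func lockFor(image string) *sync.Mutex {
	m, _ := locks.LoadOrStore(image, &sync.Mutex{})
	return m.(*sync.Mutex)
}

// ensureCached returns the cached tar path, creating a placeholder file
// only when it is missing, and reports how long the check took.
func ensureCached(cacheDir, image string) (string, error) {
	mu := lockFor(image)
	mu.Lock()
	defer mu.Unlock()

	start := time.Now()
	// e.g. registry.k8s.io/pause:3.10.1 -> registry.k8s.io/pause_3.10.1
	rel := strings.ReplaceAll(image, ":", "_")
	dest := filepath.Join(cacheDir, rel)

	if _, err := os.Stat(dest); err == nil {
		fmt.Printf("%s exists, skipping save (took %s)\n", dest, time.Since(start))
		return dest, nil
	}
	if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
		return "", err
	}
	// Real code would export the image here; an empty file stands in for it.
	if err := os.WriteFile(dest, nil, 0o644); err != nil {
		return "", err
	}
	fmt.Printf("saved %s -> %s (took %s)\n", image, dest, time.Since(start))
	return dest, nil
}

func main() {
	dir, _ := os.MkdirTemp("", "image-cache")
	defer os.RemoveAll(dir)
	for i := 0; i < 2; i++ { // the second pass hits the "exists, skipping" branch
		ensureCached(dir, "registry.k8s.io/pause:3.10.1")
	}
}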
	I1202 16:18:27.882132  632702 start.go:360] acquireMachinesLock for newest-cni-682353: {Name:mkfed8f02380af59f92aa0b6f8ae02a29dbe0c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 16:18:27.882157  632702 start.go:364] duration metric: took 15.081µs to acquireMachinesLock for "newest-cni-682353"
	I1202 16:18:27.882174  632702 start.go:96] Skipping create...Using existing machine configuration
	I1202 16:18:27.882187  632702 fix.go:54] fixHost starting: 
	I1202 16:18:27.882409  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:27.899829  632702 fix.go:112] recreateIfNeeded on newest-cni-682353: state=Stopped err=<nil>
	W1202 16:18:27.899862  632702 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 16:18:27.902536  632702 out.go:252] * Restarting existing docker container for "newest-cni-682353" ...
	I1202 16:18:27.902610  632702 cli_runner.go:164] Run: docker start newest-cni-682353
	I1202 16:18:28.167307  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:28.188833  632702 kic.go:430] container "newest-cni-682353" state is running.
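The repeated "docker container inspect ... --format={{.State.Status}}" calls above are how the restarted container's state is polled. A stand-alone Go sketch of the same poll via the docker CLI (illustrative, not minikube's cli_runner; the profile name is copied from the log):

// Poll a container's state with the docker CLI until it reports "running".
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const name = "newest-cni-682353" // profile name from the log above
	deadline := time.Now().Add(30 * time.Second)
	for {
		state, err := containerState(name)
		if err == nil && state == "running" {
			fmt.Println("container is running")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out; last state:", state, "err:", err)
			return
		}
		time.Sleep(time.Second)
	}
}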
	I1202 16:18:28.189309  632702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:18:28.210140  632702 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/config.json ...
	I1202 16:18:28.210458  632702 machine.go:94] provisionDockerMachine start ...
	I1202 16:18:28.210557  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:28.230546  632702 main.go:143] libmachine: Using SSH client type: native
	I1202 16:18:28.230836  632702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33265 <nil> <nil>}
	I1202 16:18:28.230849  632702 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 16:18:28.231650  632702 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33004->127.0.0.1:33265: read: connection reset by peer
	I1202 16:18:31.375476  632702 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-682353
	
	I1202 16:18:31.375503  632702 ubuntu.go:182] provisioning hostname "newest-cni-682353"
	I1202 16:18:31.375561  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:31.393858  632702 main.go:143] libmachine: Using SSH client type: native
	I1202 16:18:31.394130  632702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33265 <nil> <nil>}
	I1202 16:18:31.394148  632702 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-682353 && echo "newest-cni-682353" | sudo tee /etc/hostname
	I1202 16:18:31.546381  632702 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-682353
	
	I1202 16:18:31.546520  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:31.565035  632702 main.go:143] libmachine: Using SSH client type: native
	I1202 16:18:31.565246  632702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33265 <nil> <nil>}
	I1202 16:18:31.565262  632702 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-682353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-682353/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-682353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 16:18:31.704273  632702 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 16:18:31.704309  632702 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-264555/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-264555/.minikube}
	I1202 16:18:31.704334  632702 ubuntu.go:190] setting up certificates
	I1202 16:18:31.704350  632702 provision.go:84] configureAuth start
	I1202 16:18:31.704415  632702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:18:31.722482  632702 provision.go:143] copyHostCerts
	I1202 16:18:31.722557  632702 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem, removing ...
	I1202 16:18:31.722574  632702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem
	I1202 16:18:31.722658  632702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/ca.pem (1082 bytes)
	I1202 16:18:31.722781  632702 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem, removing ...
	I1202 16:18:31.722793  632702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem
	I1202 16:18:31.722840  632702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/cert.pem (1123 bytes)
	I1202 16:18:31.722926  632702 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem, removing ...
	I1202 16:18:31.722935  632702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem
	I1202 16:18:31.722981  632702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-264555/.minikube/key.pem (1675 bytes)
	I1202 16:18:31.723057  632702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem org=jenkins.newest-cni-682353 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-682353]
	I1202 16:18:31.891698  632702 provision.go:177] copyRemoteCerts
	I1202 16:18:31.891783  632702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 16:18:31.891835  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:31.909877  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:32.009102  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 16:18:32.026501  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 16:18:32.044121  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 16:18:32.061615  632702 provision.go:87] duration metric: took 357.250765ms to configureAuth
	I1202 16:18:32.061645  632702 ubuntu.go:206] setting minikube options for container-runtime
	I1202 16:18:32.061831  632702 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:18:32.061950  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:32.080461  632702 main.go:143] libmachine: Using SSH client type: native
	I1202 16:18:32.080679  632702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33265 <nil> <nil>}
	I1202 16:18:32.080695  632702 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 16:18:32.375082  632702 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 16:18:32.375109  632702 machine.go:97] duration metric: took 4.16462682s to provisionDockerMachine
	I1202 16:18:32.375121  632702 start.go:293] postStartSetup for "newest-cni-682353" (driver="docker")
	I1202 16:18:32.375134  632702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 16:18:32.375197  632702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 16:18:32.375236  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:32.393848  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:32.495325  632702 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 16:18:32.499380  632702 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 16:18:32.499409  632702 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 16:18:32.499443  632702 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/addons for local assets ...
	I1202 16:18:32.499503  632702 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-264555/.minikube/files for local assets ...
	I1202 16:18:32.499604  632702 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem -> 2680992.pem in /etc/ssl/certs
	I1202 16:18:32.499737  632702 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 16:18:32.508124  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:18:32.527058  632702 start.go:296] duration metric: took 151.91868ms for postStartSetup
	I1202 16:18:32.527147  632702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:18:32.527200  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:32.547309  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:32.646662  632702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 16:18:32.651518  632702 fix.go:56] duration metric: took 4.769325714s for fixHost
	I1202 16:18:32.651541  632702 start.go:83] releasing machines lock for "newest-cni-682353", held for 4.769372545s
	I1202 16:18:32.651604  632702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-682353
	I1202 16:18:32.670263  632702 ssh_runner.go:195] Run: cat /version.json
	I1202 16:18:32.670327  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:32.670341  632702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 16:18:32.670448  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:32.690193  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:32.690321  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:32.843400  632702 ssh_runner.go:195] Run: systemctl --version
	I1202 16:18:32.849952  632702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 16:18:32.889736  632702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 16:18:32.895185  632702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 16:18:32.895255  632702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 16:18:32.906466  632702 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 16:18:32.906507  632702 start.go:496] detecting cgroup driver to use...
	I1202 16:18:32.906547  632702 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 16:18:32.906704  632702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 16:18:32.924596  632702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 16:18:32.940804  632702 docker.go:218] disabling cri-docker service (if available) ...
	I1202 16:18:32.940873  632702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 16:18:32.960372  632702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 16:18:32.977894  632702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 16:18:33.070681  632702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 16:18:33.155348  632702 docker.go:234] disabling docker service ...
	I1202 16:18:33.155430  632702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 16:18:33.170699  632702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 16:18:33.183267  632702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 16:18:33.264888  632702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 16:18:33.346350  632702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 16:18:33.358835  632702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 16:18:33.373408  632702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 16:18:33.373484  632702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.382672  632702 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 16:18:33.382733  632702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.392523  632702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.401478  632702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.410378  632702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 16:18:33.418878  632702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.427684  632702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.436322  632702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 16:18:33.445202  632702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 16:18:33.452752  632702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 16:18:33.460417  632702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:18:33.539438  632702 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 16:18:33.695766  632702 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 16:18:33.695848  632702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 16:18:33.701323  632702 start.go:564] Will wait 60s for crictl version
	I1202 16:18:33.701390  632702 ssh_runner.go:195] Run: which crictl
	I1202 16:18:33.705206  632702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 16:18:33.735601  632702 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 16:18:33.735697  632702 ssh_runner.go:195] Run: crio --version
	I1202 16:18:33.770986  632702 ssh_runner.go:195] Run: crio --version
	I1202 16:18:33.807856  632702 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 16:18:33.809040  632702 cli_runner.go:164] Run: docker network inspect newest-cni-682353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 16:18:33.827561  632702 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 16:18:33.831887  632702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
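The grep plus bash rewrite above ensures /etc/hosts contains exactly one "192.168.103.1	host.minikube.internal" entry. A rough Go equivalent of that rewrite, run against a scratch file rather than the real /etc/hosts (illustrative; the IP and hostname are taken from the log):

// Drop any stale mapping for the host, append the fresh one, and replace
// the file atomically via a temp file plus rename.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // skip blanks and any stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	path := "hosts.scratch" // stand-in for /etc/hosts
	os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := ensureHostsEntry(path, "192.168.103.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
		return
	}
	out, _ := os.ReadFile(path)
	fmt.Print(string(out))
}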
	I1202 16:18:33.843369  632702 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1202 16:18:33.844353  632702 kubeadm.go:884] updating cluster {Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 16:18:33.844491  632702 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 16:18:33.844532  632702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 16:18:33.876407  632702 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 16:18:33.876447  632702 cache_images.go:86] Images are preloaded, skipping loading
	I1202 16:18:33.876456  632702 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1202 16:18:33.876576  632702 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-682353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 16:18:33.876662  632702 ssh_runner.go:195] Run: crio config
	I1202 16:18:33.925827  632702 cni.go:84] Creating CNI manager for ""
	I1202 16:18:33.925845  632702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 16:18:33.925860  632702 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 16:18:33.925882  632702 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-682353 NodeName:newest-cni-682353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 16:18:33.926012  632702 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-682353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
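The generated config pairs podSubnet 10.42.0.0/16 with serviceSubnet 10.96.0.0/12, and the basic constraint behind that choice is that the two ranges stay disjoint. A tiny Go check of that constraint (a sketch, not part of minikube):

// Verify the pod and service CIDRs from the kubeadm config do not overlap.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pod := netip.MustParsePrefix("10.42.0.0/16") // podSubnet from the config
	svc := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet from the config
	if pod.Overlaps(svc) {
		fmt.Println("pod and service CIDRs overlap - pick disjoint ranges")
		return
	}
	fmt.Println("pod and service CIDRs are disjoint")
}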
	
	I1202 16:18:33.926070  632702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 16:18:33.934985  632702 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 16:18:33.935068  632702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 16:18:33.943624  632702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1202 16:18:33.957253  632702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 16:18:33.970750  632702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 16:18:33.984335  632702 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 16:18:33.988303  632702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 16:18:33.998545  632702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:18:34.079270  632702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:18:34.100703  632702 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353 for IP: 192.168.103.2
	I1202 16:18:34.100731  632702 certs.go:195] generating shared ca certs ...
	I1202 16:18:34.100753  632702 certs.go:227] acquiring lock for ca certs: {Name:mk039ff27816ff98157f54038cc23b17e408fc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:34.100947  632702 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key
	I1202 16:18:34.100993  632702 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key
	I1202 16:18:34.101002  632702 certs.go:257] generating profile certs ...
	I1202 16:18:34.101098  632702 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/client.key
	I1202 16:18:34.101156  632702 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key.5833a0e0
	I1202 16:18:34.101190  632702 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key
	I1202 16:18:34.101308  632702 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem (1338 bytes)
	W1202 16:18:34.101340  632702 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099_empty.pem, impossibly tiny 0 bytes
	I1202 16:18:34.101352  632702 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 16:18:34.101378  632702 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/ca.pem (1082 bytes)
	I1202 16:18:34.101403  632702 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/cert.pem (1123 bytes)
	I1202 16:18:34.101452  632702 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/certs/key.pem (1675 bytes)
	I1202 16:18:34.101498  632702 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem (1708 bytes)
	I1202 16:18:34.102143  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 16:18:34.120751  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 16:18:34.139730  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 16:18:34.161166  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 16:18:34.184970  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 16:18:34.204608  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 16:18:34.222591  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 16:18:34.240258  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/newest-cni-682353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 16:18:34.258300  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/ssl/certs/2680992.pem --> /usr/share/ca-certificates/2680992.pem (1708 bytes)
	I1202 16:18:34.275854  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 16:18:34.294661  632702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-264555/.minikube/certs/268099.pem --> /usr/share/ca-certificates/268099.pem (1338 bytes)
	I1202 16:18:34.314089  632702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 16:18:34.327008  632702 ssh_runner.go:195] Run: openssl version
	I1202 16:18:34.333045  632702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 16:18:34.341352  632702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:34.344973  632702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 15:16 /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:34.345025  632702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 16:18:34.380726  632702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 16:18:34.389203  632702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268099.pem && ln -fs /usr/share/ca-certificates/268099.pem /etc/ssl/certs/268099.pem"
	I1202 16:18:34.398058  632702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268099.pem
	I1202 16:18:34.401858  632702 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 15:33 /usr/share/ca-certificates/268099.pem
	I1202 16:18:34.401913  632702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268099.pem
	I1202 16:18:34.435630  632702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/268099.pem /etc/ssl/certs/51391683.0"
	I1202 16:18:34.444261  632702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2680992.pem && ln -fs /usr/share/ca-certificates/2680992.pem /etc/ssl/certs/2680992.pem"
	I1202 16:18:34.452730  632702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2680992.pem
	I1202 16:18:34.456493  632702 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 15:33 /usr/share/ca-certificates/2680992.pem
	I1202 16:18:34.456538  632702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2680992.pem
	I1202 16:18:34.492029  632702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2680992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 16:18:34.500458  632702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 16:18:34.504312  632702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 16:18:34.538255  632702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 16:18:34.572311  632702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 16:18:34.607698  632702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 16:18:34.650354  632702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 16:18:34.697324  632702 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
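The "openssl x509 -noout ... -checkend 86400" runs above test whether each control-plane certificate expires within the next 24 hours, which decides whether certs get regenerated before restart. An equivalent stand-alone Go check (illustrative; the path is a placeholder, not one of the node's cert files):

// Report whether a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour) // placeholder path
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h - regeneration needed")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}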
	I1202 16:18:34.748663  632702 kubeadm.go:401] StartCluster: {Name:newest-cni-682353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-682353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 16:18:34.748785  632702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 16:18:34.748868  632702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 16:18:34.786306  632702 cri.go:89] found id: "cb312f13091c58bd72d656a9744dbd05f9804fdf67c95588b994fb7a3c8a08b7"
	I1202 16:18:34.786329  632702 cri.go:89] found id: "c8f017ed73870ab02759b08f235ff372e1d39e18f2cba24a7dc958208be38f45"
	I1202 16:18:34.786336  632702 cri.go:89] found id: "637f7511012f268dc11abb2bdb14e8541a010a8282803345662aee9434c58f91"
	I1202 16:18:34.786340  632702 cri.go:89] found id: "0d367e0e69f0e7e85292b0ba7c75a0d708dac3e3ee3b2f01dc0c4ea1736b98fc"
	I1202 16:18:34.786345  632702 cri.go:89] found id: ""
	I1202 16:18:34.786393  632702 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 16:18:34.798709  632702 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T16:18:34Z" level=error msg="open /run/runc: no such file or directory"
	I1202 16:18:34.798788  632702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 16:18:34.806965  632702 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 16:18:34.806987  632702 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 16:18:34.807039  632702 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 16:18:34.814989  632702 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:18:34.815494  632702 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-682353" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:18:34.815654  632702 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-264555/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-682353" cluster setting kubeconfig missing "newest-cni-682353" context setting]
	I1202 16:18:34.816046  632702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:34.817669  632702 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 16:18:34.825062  632702 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1202 16:18:34.825089  632702 kubeadm.go:602] duration metric: took 18.088181ms to restartPrimaryControlPlane
	I1202 16:18:34.825100  632702 kubeadm.go:403] duration metric: took 76.448812ms to StartCluster
	I1202 16:18:34.825116  632702 settings.go:142] acquiring lock: {Name:mkb00b5395affa5a80ee09f21cfed53b1afcd59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:34.825187  632702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:18:34.825704  632702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-264555/kubeconfig: {Name:mk809d3f43352510256b48d000241cc8ee13f80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 16:18:34.825935  632702 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 16:18:34.826057  632702 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 16:18:34.826146  632702 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-682353"
	I1202 16:18:34.826156  632702 config.go:182] Loaded profile config "newest-cni-682353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:18:34.826169  632702 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-682353"
	W1202 16:18:34.826178  632702 addons.go:248] addon storage-provisioner should already be in state true
	I1202 16:18:34.826183  632702 addons.go:70] Setting dashboard=true in profile "newest-cni-682353"
	I1202 16:18:34.826206  632702 host.go:66] Checking if "newest-cni-682353" exists ...
	I1202 16:18:34.826212  632702 addons.go:239] Setting addon dashboard=true in "newest-cni-682353"
	W1202 16:18:34.826224  632702 addons.go:248] addon dashboard should already be in state true
	I1202 16:18:34.826260  632702 host.go:66] Checking if "newest-cni-682353" exists ...
	I1202 16:18:34.826183  632702 addons.go:70] Setting default-storageclass=true in profile "newest-cni-682353"
	I1202 16:18:34.826304  632702 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-682353"
	I1202 16:18:34.826601  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:34.826669  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:34.826748  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:34.828960  632702 out.go:179] * Verifying Kubernetes components...
	I1202 16:18:34.830217  632702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 16:18:34.851713  632702 addons.go:239] Setting addon default-storageclass=true in "newest-cni-682353"
	W1202 16:18:34.851738  632702 addons.go:248] addon default-storageclass should already be in state true
	I1202 16:18:34.851765  632702 host.go:66] Checking if "newest-cni-682353" exists ...
	I1202 16:18:34.852195  632702 cli_runner.go:164] Run: docker container inspect newest-cni-682353 --format={{.State.Status}}
	I1202 16:18:34.852976  632702 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 16:18:34.852977  632702 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 16:18:34.854295  632702 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:18:34.854319  632702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 16:18:34.854363  632702 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 16:18:34.854372  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:34.855457  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 16:18:34.855481  632702 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 16:18:34.855540  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:34.887571  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:34.889546  632702 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 16:18:34.889575  632702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 16:18:34.889649  632702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-682353
	I1202 16:18:34.894669  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:34.911859  632702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33265 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/newest-cni-682353/id_rsa Username:docker}
	I1202 16:18:34.989413  632702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 16:18:35.007363  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 16:18:35.007410  632702 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 16:18:35.007592  632702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 16:18:35.007876  632702 api_server.go:52] waiting for apiserver process to appear ...
	I1202 16:18:35.007938  632702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:18:35.022994  632702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 16:18:35.023019  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 16:18:35.023040  632702 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 16:18:35.037128  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 16:18:35.037156  632702 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 16:18:35.051762  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 16:18:35.051783  632702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 16:18:35.065496  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 16:18:35.065528  632702 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 16:18:35.080881  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 16:18:35.080914  632702 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 16:18:35.094143  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 16:18:35.094167  632702 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 16:18:35.106720  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 16:18:35.106746  632702 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 16:18:35.119818  632702 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:18:35.119844  632702 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 16:18:35.132843  632702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 16:18:37.146337  632702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.138701206s)
	I1202 16:18:37.146386  632702 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.138423904s)
	I1202 16:18:37.146438  632702 api_server.go:72] duration metric: took 2.320441896s to wait for apiserver process to appear ...
	I1202 16:18:37.146447  632702 api_server.go:88] waiting for apiserver healthz status ...
	I1202 16:18:37.146450  632702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.123427863s)
	I1202 16:18:37.146470  632702 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 16:18:37.146639  632702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.013762349s)
	I1202 16:18:37.148174  632702 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-682353 addons enable metrics-server
	
	I1202 16:18:37.150970  632702 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:18:37.150995  632702 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 16:18:37.159969  632702 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 16:18:37.161127  632702 addons.go:530] duration metric: took 2.335078473s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 16:18:37.646850  632702 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 16:18:37.651399  632702 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 16:18:37.651444  632702 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 16:18:38.147130  632702 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 16:18:38.151058  632702 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1202 16:18:38.152035  632702 api_server.go:141] control plane version: v1.35.0-beta.0
	I1202 16:18:38.152059  632702 api_server.go:131] duration metric: took 1.005605082s to wait for apiserver health ...
	I1202 16:18:38.152068  632702 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 16:18:38.155711  632702 system_pods.go:59] 8 kube-system pods found
	I1202 16:18:38.155748  632702 system_pods.go:61] "coredns-7d764666f9-jb9wz" [889f4af6-e976-4ec7-ae6e-ed5ec813fe4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 16:18:38.155758  632702 system_pods.go:61] "etcd-newest-cni-682353" [5ab9fd7e-9c55-45a2-ac07-46d797be98d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 16:18:38.155767  632702 system_pods.go:61] "kindnet-cxfrf" [164fac47-6c74-434b-b780-1ba1c2a40495] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 16:18:38.155774  632702 system_pods.go:61] "kube-apiserver-newest-cni-682353" [df312caa-500b-4c0b-bda0-f8acafcff8b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 16:18:38.155782  632702 system_pods.go:61] "kube-controller-manager-newest-cni-682353" [17765d5c-8f15-40da-886f-c807519c7e05] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 16:18:38.155789  632702 system_pods.go:61] "kube-proxy-srq78" [6d9b68b3-fb87-47f4-887a-3b1851999e6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 16:18:38.155794  632702 system_pods.go:61] "kube-scheduler-newest-cni-682353" [6b53974a-1f7e-4d8a-bae6-24aa797c54d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 16:18:38.155800  632702 system_pods.go:61] "storage-provisioner" [c5d388c9-2f39-4c65-8e57-7846b28c1db8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 16:18:38.155806  632702 system_pods.go:74] duration metric: took 3.732484ms to wait for pod list to return data ...
	I1202 16:18:38.155815  632702 default_sa.go:34] waiting for default service account to be created ...
	I1202 16:18:38.157998  632702 default_sa.go:45] found service account: "default"
	I1202 16:18:38.158016  632702 default_sa.go:55] duration metric: took 2.195211ms for default service account to be created ...
	I1202 16:18:38.158026  632702 kubeadm.go:587] duration metric: took 3.332049235s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 16:18:38.158040  632702 node_conditions.go:102] verifying NodePressure condition ...
	I1202 16:18:38.160245  632702 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 16:18:38.160263  632702 node_conditions.go:123] node cpu capacity is 8
	I1202 16:18:38.160276  632702 node_conditions.go:105] duration metric: took 2.232934ms to run NodePressure ...
	I1202 16:18:38.160287  632702 start.go:242] waiting for startup goroutines ...
	I1202 16:18:38.160293  632702 start.go:247] waiting for cluster config update ...
	I1202 16:18:38.160306  632702 start.go:256] writing updated cluster config ...
	I1202 16:18:38.160581  632702 ssh_runner.go:195] Run: rm -f paused
	I1202 16:18:38.210325  632702 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 16:18:38.212415  632702 out.go:179] * Done! kubectl is now configured to use "newest-cni-682353" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.4761285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.478683679Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=32b9b38d-7a10-4b82-b216-31cc13968154 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.479283522Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=227ebc54-cce0-45d2-8ca2-cd0035b30565 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.480113277Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.480507216Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.480732083Z" level=info msg="Ran pod sandbox 385fd86e3a4791076f858c9038c5eabcf60eece27645dadf02b40e5e1ea2c2b4 with infra container: kube-system/kindnet-cxfrf/POD" id=32b9b38d-7a10-4b82-b216-31cc13968154 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.481189568Z" level=info msg="Ran pod sandbox 7ba89cb394156b553aa049542bef7bc414c4e6b77a966253691edcad3b0f1a56 with infra container: kube-system/kube-proxy-srq78/POD" id=227ebc54-cce0-45d2-8ca2-cd0035b30565 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.481856791Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a33dd7a2-f06d-49a4-a732-8f699623c167 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.482167702Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=15ebb4be-f87c-4cf5-901a-c52514364122 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.482781659Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ae835134-3799-44d4-86d0-bb3d433f4f03 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.483013363Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=6d55e3fd-a930-49de-9bd2-c9cf366a464b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.483879732Z" level=info msg="Creating container: kube-system/kube-proxy-srq78/kube-proxy" id=5832e488-4dd4-459f-8e07-adac26986fa8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.483944604Z" level=info msg="Creating container: kube-system/kindnet-cxfrf/kindnet-cni" id=6892e807-9aa7-4eba-892b-7e0c10bdb3f3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.483977092Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.484015141Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.487792258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.488281267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.488282249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.488844313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.516642673Z" level=info msg="Created container 91fc3c64a25e3fda17bac57736962354faa703cde5738eaae25bd35fa4c465c3: kube-system/kindnet-cxfrf/kindnet-cni" id=6892e807-9aa7-4eba-892b-7e0c10bdb3f3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.517231254Z" level=info msg="Starting container: 91fc3c64a25e3fda17bac57736962354faa703cde5738eaae25bd35fa4c465c3" id=f8e94209-a2e4-4c2d-aa68-b7747892eaf5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.518916695Z" level=info msg="Started container" PID=1032 containerID=91fc3c64a25e3fda17bac57736962354faa703cde5738eaae25bd35fa4c465c3 description=kube-system/kindnet-cxfrf/kindnet-cni id=f8e94209-a2e4-4c2d-aa68-b7747892eaf5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=385fd86e3a4791076f858c9038c5eabcf60eece27645dadf02b40e5e1ea2c2b4
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.519558707Z" level=info msg="Created container 4e5a1f21c78595fd7dc11cf79c1d7485a73ccb9864314205ffa58d85454752b5: kube-system/kube-proxy-srq78/kube-proxy" id=5832e488-4dd4-459f-8e07-adac26986fa8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.520153669Z" level=info msg="Starting container: 4e5a1f21c78595fd7dc11cf79c1d7485a73ccb9864314205ffa58d85454752b5" id=65c9d276-a9a5-406d-b8a9-8511cd360de1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 16:18:37 newest-cni-682353 crio[524]: time="2025-12-02T16:18:37.522959845Z" level=info msg="Started container" PID=1033 containerID=4e5a1f21c78595fd7dc11cf79c1d7485a73ccb9864314205ffa58d85454752b5 description=kube-system/kube-proxy-srq78/kube-proxy id=65c9d276-a9a5-406d-b8a9-8511cd360de1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ba89cb394156b553aa049542bef7bc414c4e6b77a966253691edcad3b0f1a56
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	91fc3c64a25e3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   385fd86e3a479       kindnet-cxfrf                               kube-system
	4e5a1f21c7859       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   5 seconds ago       Running             kube-proxy                1                   7ba89cb394156       kube-proxy-srq78                            kube-system
	cb312f13091c5       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   8 seconds ago       Running             etcd                      1                   4c5f6eabab082       etcd-newest-cni-682353                      kube-system
	c8f017ed73870       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   8 seconds ago       Running             kube-apiserver            1                   b41520b6aeb24       kube-apiserver-newest-cni-682353            kube-system
	637f7511012f2       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   8 seconds ago       Running             kube-controller-manager   1                   7548eff638644       kube-controller-manager-newest-cni-682353   kube-system
	0d367e0e69f0e       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   8 seconds ago       Running             kube-scheduler            1                   8d9e341e48798       kube-scheduler-newest-cni-682353            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-682353
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-682353
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=newest-cni-682353
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T16_18_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 16:18:13 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-682353
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 16:18:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 16:18:36 +0000   Tue, 02 Dec 2025 16:18:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 16:18:36 +0000   Tue, 02 Dec 2025 16:18:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 16:18:36 +0000   Tue, 02 Dec 2025 16:18:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 02 Dec 2025 16:18:36 +0000   Tue, 02 Dec 2025 16:18:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-682353
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                33d4fe74-dbd2-4001-8121-c4f8c133d3ca
	  Boot ID:                    e00bac56-b076-4861-bc22-5d3b11269f73
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-682353                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-cxfrf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-682353             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-newest-cni-682353    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-srq78                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-682353             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  24s   node-controller  Node newest-cni-682353 event: Registered Node newest-cni-682353 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-682353 event: Registered Node newest-cni-682353 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 77 64 a2 f5 d2 0e 8f 78 59 6a 39 08 00
	[Dec 2 16:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca bc 15 8e 4f 39 08 06
	[  +0.202375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.441346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 50 97 74 77 f9 08 06
	[  +0.000311] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 8c 8a 4d de f7 08 06
	[Dec 2 16:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 87 56 d2 46 1b 08 06
	[  +0.000909] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 25 86 21 45 76 08 06
	[  +7.449328] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +17.731920] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	[  +2.165442] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 0b db fb 54 af 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a 06 ef 04 0a 22 08 06
	[ +14.651928] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 5d 2d 15 78 ec 08 06
	[  +0.000385] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 8e 5c 48 83 60 08 06
	
	
	==> etcd [cb312f13091c58bd72d656a9744dbd05f9804fdf67c95588b994fb7a3c8a08b7] <==
	{"level":"warn","ts":"2025-12-02T16:18:36.028135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.034435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.047569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.054454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.060618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.067436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.073876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.080023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.086349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.093038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.103573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.110038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.116389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.124219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.131013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.137322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.145170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.158093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.164647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.177602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.181011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.188696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.195263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.201858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T16:18:36.254332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43106","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 16:18:43 up  3:01,  0 user,  load average: 3.06, 3.86, 2.70
	Linux newest-cni-682353 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [91fc3c64a25e3fda17bac57736962354faa703cde5738eaae25bd35fa4c465c3] <==
	I1202 16:18:37.634100       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 16:18:37.728720       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1202 16:18:37.728871       1 main.go:148] setting mtu 1500 for CNI 
	I1202 16:18:37.728895       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 16:18:37.728919       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T16:18:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 16:18:37.928807       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 16:18:37.928988       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 16:18:37.929020       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 16:18:37.929253       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 16:18:38.329160       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 16:18:38.329192       1 metrics.go:72] Registering metrics
	I1202 16:18:38.329255       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [c8f017ed73870ab02759b08f235ff372e1d39e18f2cba24a7dc958208be38f45] <==
	I1202 16:18:36.708024       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 16:18:36.708031       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:36.708031       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 16:18:36.708080       1 cache.go:39] Caches are synced for autoregister controller
	I1202 16:18:36.708123       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 16:18:36.708030       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 16:18:36.708211       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 16:18:36.708233       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 16:18:36.708044       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 16:18:36.708531       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:36.713685       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:36.713709       1 policy_source.go:248] refreshing policies
	I1202 16:18:36.714194       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 16:18:36.750294       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 16:18:36.956487       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 16:18:36.984966       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 16:18:37.008265       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 16:18:37.016414       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 16:18:37.023474       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 16:18:37.060822       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.240.134"}
	I1202 16:18:37.072032       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.170.34"}
	I1202 16:18:37.610944       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 16:18:40.320301       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 16:18:40.371491       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 16:18:40.420373       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [637f7511012f268dc11abb2bdb14e8541a010a8282803345662aee9434c58f91] <==
	I1202 16:18:39.882400       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.882464       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.882551       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.882604       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.882633       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.882653       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.883272       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.883862       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.887508       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.887580       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.887618       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.887744       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888619       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888641       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888675       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888695       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888717       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888729       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888767       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888843       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888859       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 16:18:39.888865       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 16:18:39.888810       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.888736       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:39.979875       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [4e5a1f21c78595fd7dc11cf79c1d7485a73ccb9864314205ffa58d85454752b5] <==
	I1202 16:18:37.555441       1 server_linux.go:53] "Using iptables proxy"
	I1202 16:18:37.619588       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:18:37.719733       1 shared_informer.go:377] "Caches are synced"
	I1202 16:18:37.719796       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1202 16:18:37.719912       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 16:18:37.741976       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 16:18:37.742048       1 server_linux.go:136] "Using iptables Proxier"
	I1202 16:18:37.748089       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 16:18:37.748594       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 16:18:37.748639       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:18:37.750278       1 config.go:309] "Starting node config controller"
	I1202 16:18:37.750354       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 16:18:37.750367       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 16:18:37.750394       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 16:18:37.750399       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 16:18:37.750438       1 config.go:200] "Starting service config controller"
	I1202 16:18:37.750446       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 16:18:37.750467       1 config.go:106] "Starting endpoint slice config controller"
	I1202 16:18:37.750472       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 16:18:37.850579       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 16:18:37.850592       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 16:18:37.850613       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0d367e0e69f0e7e85292b0ba7c75a0d708dac3e3ee3b2f01dc0c4ea1736b98fc] <==
	I1202 16:18:35.067773       1 serving.go:386] Generated self-signed cert in-memory
	W1202 16:18:36.624279       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 16:18:36.624310       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 16:18:36.624321       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 16:18:36.624332       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 16:18:36.675284       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 16:18:36.675403       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 16:18:36.679679       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 16:18:36.679894       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 16:18:36.680763       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 16:18:36.679920       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 16:18:36.782620       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: I1202 16:18:36.743495     660 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: E1202 16:18:36.783447     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-682353\" already exists" pod="kube-system/etcd-newest-cni-682353"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: I1202 16:18:36.783490     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-682353"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: E1202 16:18:36.791283     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-682353\" already exists" pod="kube-system/kube-apiserver-newest-cni-682353"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: I1202 16:18:36.791329     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-682353"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: E1202 16:18:36.798589     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-682353\" already exists" pod="kube-system/kube-controller-manager-newest-cni-682353"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: I1202 16:18:36.798786     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-682353"
	Dec 02 16:18:36 newest-cni-682353 kubelet[660]: E1202 16:18:36.805350     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-682353\" already exists" pod="kube-system/kube-scheduler-newest-cni-682353"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.165434     660 apiserver.go:52] "Watching apiserver"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: E1202 16:18:37.171770     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-682353" containerName="kube-controller-manager"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: E1202 16:18:37.212132     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-682353" containerName="kube-scheduler"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.212173     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-682353"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: E1202 16:18:37.212352     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-682353" containerName="kube-apiserver"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: E1202 16:18:37.219326     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-682353\" already exists" pod="kube-system/etcd-newest-cni-682353"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: E1202 16:18:37.219410     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-682353" containerName="etcd"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.272830     660 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.273704     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/164fac47-6c74-434b-b780-1ba1c2a40495-cni-cfg\") pod \"kindnet-cxfrf\" (UID: \"164fac47-6c74-434b-b780-1ba1c2a40495\") " pod="kube-system/kindnet-cxfrf"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.273750     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/164fac47-6c74-434b-b780-1ba1c2a40495-xtables-lock\") pod \"kindnet-cxfrf\" (UID: \"164fac47-6c74-434b-b780-1ba1c2a40495\") " pod="kube-system/kindnet-cxfrf"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.273798     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d9b68b3-fb87-47f4-887a-3b1851999e6c-xtables-lock\") pod \"kube-proxy-srq78\" (UID: \"6d9b68b3-fb87-47f4-887a-3b1851999e6c\") " pod="kube-system/kube-proxy-srq78"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.273836     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/164fac47-6c74-434b-b780-1ba1c2a40495-lib-modules\") pod \"kindnet-cxfrf\" (UID: \"164fac47-6c74-434b-b780-1ba1c2a40495\") " pod="kube-system/kindnet-cxfrf"
	Dec 02 16:18:37 newest-cni-682353 kubelet[660]: I1202 16:18:37.273868     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d9b68b3-fb87-47f4-887a-3b1851999e6c-lib-modules\") pod \"kube-proxy-srq78\" (UID: \"6d9b68b3-fb87-47f4-887a-3b1851999e6c\") " pod="kube-system/kube-proxy-srq78"
	Dec 02 16:18:38 newest-cni-682353 kubelet[660]: E1202 16:18:38.217510     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-682353" containerName="etcd"
	Dec 02 16:18:39 newest-cni-682353 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 16:18:39 newest-cni-682353 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 16:18:39 newest-cni-682353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-682353 -n newest-cni-682353
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-682353 -n newest-cni-682353: exit status 2 (335.099261ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-682353 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-jb9wz storage-provisioner dashboard-metrics-scraper-867fb5f87b-9m4mf kubernetes-dashboard-b84665fb8-vh2pr
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-682353 describe pod coredns-7d764666f9-jb9wz storage-provisioner dashboard-metrics-scraper-867fb5f87b-9m4mf kubernetes-dashboard-b84665fb8-vh2pr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-682353 describe pod coredns-7d764666f9-jb9wz storage-provisioner dashboard-metrics-scraper-867fb5f87b-9m4mf kubernetes-dashboard-b84665fb8-vh2pr: exit status 1 (63.890595ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jb9wz" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-9m4mf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-vh2pr" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-682353 describe pod coredns-7d764666f9-jb9wz storage-provisioner dashboard-metrics-scraper-867fb5f87b-9m4mf kubernetes-dashboard-b84665fb8-vh2pr: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.35s)
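For reference, the post-mortem steps above can be replayed by hand against the same cluster. This is a minimal sketch, assuming the newest-cni-682353 profile from this run still exists and the minikube binary sits at out/minikube-linux-amd64; it reuses only commands already shown in the helpers_test.go output, except for the added -n kube-system flag on the final describe (the original describe omitted a namespace, which is consistent with the NotFound errors in the stderr block):

	# apiserver status after the failed pause (reported "Running" with exit status 2 above)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-682353 -n newest-cni-682353

	# list non-running pods across all namespaces, as the post-mortem did
	kubectl --context newest-cni-682353 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# describe the kube-system pods in their own namespace rather than the default one
	kubectl --context newest-cni-682353 -n kube-system describe pod coredns-7d764666f9-jb9wz storage-provisioner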


Test pass (334/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 12.37
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 9.97
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.24
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.94
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.17
29 TestDownloadOnlyKic 0.44
30 TestBinaryMirror 0.88
31 TestOffline 50.39
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 101.69
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/serial/GCPAuth/FakeCredentials 10.45
57 TestAddons/StoppedEnableDisable 16.69
58 TestCertOptions 26.1
59 TestCertExpiration 211.15
61 TestForceSystemdFlag 25.12
62 TestForceSystemdEnv 22.49
67 TestErrorSpam/setup 18.58
68 TestErrorSpam/start 0.71
69 TestErrorSpam/status 1.01
70 TestErrorSpam/pause 5.86
71 TestErrorSpam/unpause 5.56
72 TestErrorSpam/stop 8.15
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 40.47
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.3
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 4.91
84 TestFunctional/serial/CacheCmd/cache/add_local 2.4
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 2.18
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 46.84
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.25
95 TestFunctional/serial/LogsFileCmd 1.27
96 TestFunctional/serial/InvalidService 4.14
98 TestFunctional/parallel/ConfigCmd 0.49
99 TestFunctional/parallel/DashboardCmd 6.91
100 TestFunctional/parallel/DryRun 0.43
101 TestFunctional/parallel/InternationalLanguage 0.18
102 TestFunctional/parallel/StatusCmd 0.99
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 23.59
110 TestFunctional/parallel/SSHCmd 0.68
111 TestFunctional/parallel/CpCmd 1.85
112 TestFunctional/parallel/MySQL 17.13
113 TestFunctional/parallel/FileSync 0.28
114 TestFunctional/parallel/CertSync 1.69
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
122 TestFunctional/parallel/License 0.91
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.21
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.98
140 TestFunctional/parallel/ImageCommands/Setup 1.75
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
148 TestFunctional/parallel/Version/short 0.07
149 TestFunctional/parallel/Version/components 0.48
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
151 TestFunctional/parallel/ProfileCmd/profile_list 0.42
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
156 TestFunctional/parallel/MountCmd/any-port 7.15
157 TestFunctional/parallel/MountCmd/specific-port 2.17
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.96
159 TestFunctional/parallel/ServiceCmd/List 1.71
160 TestFunctional/parallel/ServiceCmd/JSONOutput 1.74
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 47.05
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 7.1
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 4.57
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.32
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.3
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 2.15
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 68.35
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.26
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.27
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.27
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.49
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 9.86
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.16
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 24.52
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.56
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.01
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 19.96
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.36
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.67
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.06
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.55
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.94
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.48
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.49
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 6.91
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.42
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.48
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.25
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.39
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.84
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.02
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.54
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.86
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.16
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.17
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 9.19
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.76
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.76
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 121.92
266 TestMultiControlPlane/serial/DeployApp 5.17
267 TestMultiControlPlane/serial/PingHostFromPods 1.09
268 TestMultiControlPlane/serial/AddWorkerNode 27.19
269 TestMultiControlPlane/serial/NodeLabels 0.07
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
271 TestMultiControlPlane/serial/CopyFile 17.79
272 TestMultiControlPlane/serial/StopSecondaryNode 13.87
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
274 TestMultiControlPlane/serial/RestartSecondaryNode 23.65
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.94
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 112.4
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.7
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
279 TestMultiControlPlane/serial/StopCluster 43.69
280 TestMultiControlPlane/serial/RestartCluster 53.1
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
282 TestMultiControlPlane/serial/AddSecondaryNode 42.98
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.95
288 TestJSONOutput/start/Command 35.99
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.11
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.25
313 TestKicCustomNetwork/create_custom_network 36.18
314 TestKicCustomNetwork/use_default_bridge_network 26.06
315 TestKicExistingNetwork 23.34
316 TestKicCustomSubnet 23.64
317 TestKicStaticIP 24.28
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 45.62
322 TestMountStart/serial/StartWithMountFirst 7.89
323 TestMountStart/serial/VerifyMountFirst 0.28
324 TestMountStart/serial/StartWithMountSecond 4.97
325 TestMountStart/serial/VerifyMountSecond 0.28
326 TestMountStart/serial/DeleteFirst 1.72
327 TestMountStart/serial/VerifyMountPostDelete 0.28
328 TestMountStart/serial/Stop 1.26
329 TestMountStart/serial/RestartStopped 8.63
330 TestMountStart/serial/VerifyMountPostStop 0.28
333 TestMultiNode/serial/FreshStart2Nodes 63.7
334 TestMultiNode/serial/DeployApp2Nodes 4.23
335 TestMultiNode/serial/PingHostFrom2Pods 0.73
336 TestMultiNode/serial/AddNode 23.88
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.69
339 TestMultiNode/serial/CopyFile 10.23
340 TestMultiNode/serial/StopNode 2.33
341 TestMultiNode/serial/StartAfterStop 7.36
342 TestMultiNode/serial/RestartKeepsNodes 78.87
343 TestMultiNode/serial/DeleteNode 5.28
344 TestMultiNode/serial/StopMultiNode 28.7
345 TestMultiNode/serial/RestartMultiNode 52.15
346 TestMultiNode/serial/ValidateNameConflict 26.12
351 TestPreload 107.44
353 TestScheduledStopUnix 99.44
356 TestInsufficientStorage 9.47
357 TestRunningBinaryUpgrade 326.85
359 TestKubernetesUpgrade 317.55
360 TestMissingContainerUpgrade 70.84
362 TestStoppedBinaryUpgrade/Setup 3.42
363 TestPause/serial/Start 59.49
364 TestStoppedBinaryUpgrade/Upgrade 311.58
365 TestPause/serial/SecondStartNoReconfiguration 6.52
368 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
369 TestNoKubernetes/serial/StartWithK8s 21.17
370 TestNoKubernetes/serial/StartWithStopK8s 23.32
371 TestNoKubernetes/serial/Start 7.05
372 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
373 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
374 TestNoKubernetes/serial/ProfileList 16.62
375 TestNoKubernetes/serial/Stop 1.28
376 TestNoKubernetes/serial/StartNoArgs 7.84
377 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
385 TestNetworkPlugins/group/false 3.7
389 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
397 TestNetworkPlugins/group/auto/Start 38.63
398 TestNetworkPlugins/group/kindnet/Start 41.04
399 TestNetworkPlugins/group/auto/KubeletFlags 0.33
400 TestNetworkPlugins/group/auto/NetCatPod 8.27
401 TestNetworkPlugins/group/auto/DNS 0.15
402 TestNetworkPlugins/group/auto/Localhost 0.13
403 TestNetworkPlugins/group/auto/HairPin 0.11
404 TestNetworkPlugins/group/calico/Start 51.01
405 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
406 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
407 TestNetworkPlugins/group/kindnet/NetCatPod 9.22
408 TestNetworkPlugins/group/kindnet/DNS 0.12
409 TestNetworkPlugins/group/kindnet/Localhost 0.1
410 TestNetworkPlugins/group/kindnet/HairPin 0.1
411 TestNetworkPlugins/group/custom-flannel/Start 49.88
412 TestNetworkPlugins/group/enable-default-cni/Start 43.78
413 TestNetworkPlugins/group/calico/ControllerPod 6.01
414 TestNetworkPlugins/group/flannel/Start 50.99
415 TestNetworkPlugins/group/calico/KubeletFlags 0.37
416 TestNetworkPlugins/group/calico/NetCatPod 10.9
417 TestNetworkPlugins/group/calico/DNS 0.12
418 TestNetworkPlugins/group/calico/Localhost 0.1
419 TestNetworkPlugins/group/calico/HairPin 0.12
420 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
421 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.25
422 TestNetworkPlugins/group/custom-flannel/DNS 0.12
423 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
424 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
425 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
426 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.37
427 TestNetworkPlugins/group/bridge/Start 35.36
428 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
429 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
430 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
432 TestStartStop/group/old-k8s-version/serial/FirstStart 54.94
433 TestNetworkPlugins/group/flannel/ControllerPod 6.01
434 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
435 TestNetworkPlugins/group/flannel/NetCatPod 9.23
437 TestStartStop/group/no-preload/serial/FirstStart 47.55
438 TestNetworkPlugins/group/flannel/DNS 0.16
439 TestNetworkPlugins/group/flannel/Localhost 0.15
440 TestNetworkPlugins/group/flannel/HairPin 0.13
441 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
442 TestNetworkPlugins/group/bridge/NetCatPod 8.2
443 TestNetworkPlugins/group/bridge/DNS 0.14
444 TestNetworkPlugins/group/bridge/Localhost 0.12
445 TestNetworkPlugins/group/bridge/HairPin 0.1
447 TestStartStop/group/embed-certs/serial/FirstStart 42.63
448 TestStartStop/group/old-k8s-version/serial/DeployApp 8.27
450 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 38.76
451 TestStartStop/group/no-preload/serial/DeployApp 9.34
453 TestStartStop/group/old-k8s-version/serial/Stop 16.19
455 TestStartStop/group/no-preload/serial/Stop 16.33
456 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
457 TestStartStop/group/old-k8s-version/serial/SecondStart 51.1
458 TestStartStop/group/embed-certs/serial/DeployApp 9.24
459 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
460 TestStartStop/group/no-preload/serial/SecondStart 44.87
461 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
463 TestStartStop/group/embed-certs/serial/Stop 17.85
465 TestStartStop/group/default-k8s-diff-port/serial/Stop 17.14
466 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
467 TestStartStop/group/embed-certs/serial/SecondStart 49.26
468 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
469 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.12
470 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
471 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
472 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
473 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
474 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
476 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
479 TestStartStop/group/newest-cni/serial/FirstStart 29.18
480 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
481 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
482 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
483 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
485 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
486 TestStartStop/group/newest-cni/serial/DeployApp 0
488 TestStartStop/group/newest-cni/serial/Stop 2.67
489 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
491 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
492 TestStartStop/group/newest-cni/serial/SecondStart 11.55
493 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
494 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
495 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
TestDownloadOnly/v1.28.0/json-events (12.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-794731 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-794731 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.37131869s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.37s)
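
Note: the start invocation above can be reproduced by hand with any minikube binary. This is a sketch built from the flags shown in the log; the profile name is arbitrary, and the duplicated --container-runtime flag in the harness command is redundant, so it is given once here:

    # download the Kubernetes v1.28.0 images/preload for the crio runtime without creating a cluster
    minikube start -o=json --download-only -p download-only-794731 --force \
      --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker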

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1202 15:15:25.544725  268099 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1202 15:15:25.544833  268099 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
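
Note: to check the same preload in a local cache, the path and checksum recorded by the harness suggest a check along these lines (this assumes the default cache location under ~/.minikube; the CI run uses a workspace-specific MINIKUBE_HOME as shown above):

    # tarball the test looks for; the MD5 is the checksum the download was verified against (see the LogsDuration output below)
    ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
    md5sum ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4   # expect 72bc7f8573f574c02d8c9a9b3496176b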

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-794731
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-794731: exit status 85 (78.014148ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-794731 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-794731 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:15:13
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:15:13.228031  268111 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:15:13.228124  268111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:15:13.228128  268111 out.go:374] Setting ErrFile to fd 2...
	I1202 15:15:13.228133  268111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:15:13.228356  268111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	W1202 15:15:13.228488  268111 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22021-264555/.minikube/config/config.json: open /home/jenkins/minikube-integration/22021-264555/.minikube/config/config.json: no such file or directory
	I1202 15:15:13.228986  268111 out.go:368] Setting JSON to true
	I1202 15:15:13.230019  268111 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7054,"bootTime":1764681459,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:15:13.230087  268111 start.go:143] virtualization: kvm guest
	I1202 15:15:13.235456  268111 out.go:99] [download-only-794731] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1202 15:15:13.235683  268111 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball: no such file or directory
	I1202 15:15:13.235689  268111 notify.go:221] Checking for updates...
	I1202 15:15:13.236877  268111 out.go:171] MINIKUBE_LOCATION=22021
	I1202 15:15:13.238069  268111 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:15:13.239354  268111 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:15:13.240590  268111 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 15:15:13.241961  268111 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 15:15:13.244387  268111 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 15:15:13.244777  268111 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:15:13.270859  268111 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:15:13.270937  268111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:15:13.330210  268111 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-02 15:15:13.319953332 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:15:13.330442  268111 docker.go:319] overlay module found
	I1202 15:15:13.332269  268111 out.go:99] Using the docker driver based on user configuration
	I1202 15:15:13.332301  268111 start.go:309] selected driver: docker
	I1202 15:15:13.332307  268111 start.go:927] validating driver "docker" against <nil>
	I1202 15:15:13.332408  268111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:15:13.390802  268111 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-02 15:15:13.380643632 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:15:13.391041  268111 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 15:15:13.391767  268111 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1202 15:15:13.391974  268111 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 15:15:13.393673  268111 out.go:171] Using Docker driver with root privileges
	I1202 15:15:13.394899  268111 cni.go:84] Creating CNI manager for ""
	I1202 15:15:13.394961  268111 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 15:15:13.394985  268111 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 15:15:13.395108  268111 start.go:353] cluster config:
	{Name:download-only-794731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-794731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:15:13.396456  268111 out.go:99] Starting "download-only-794731" primary control-plane node in "download-only-794731" cluster
	I1202 15:15:13.396482  268111 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 15:15:13.397784  268111 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 15:15:13.397846  268111 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 15:15:13.397883  268111 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 15:15:13.417870  268111 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 15:15:13.418071  268111 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 15:15:13.418178  268111 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 15:15:13.922714  268111 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1202 15:15:13.922752  268111 cache.go:65] Caching tarball of preloaded images
	I1202 15:15:13.922961  268111 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 15:15:13.925148  268111 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1202 15:15:13.925175  268111 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1202 15:15:14.018647  268111 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1202 15:15:14.018774  268111 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1202 15:15:17.948532  268111 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	
	
	* The control-plane node download-only-794731 host does not exist
	  To start a cluster, run: "minikube start -p download-only-794731"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
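
Note: the non-zero exit here is expected. The profile was created with --download-only, so there is no host for `minikube logs` to inspect ("The control-plane node download-only-794731 host does not exist" above), and the test records the failure at aaa_download_only_test.go:184 but still passes. The same behaviour can be observed outside the harness while the profile still exists:

    out/minikube-linux-amd64 logs -p download-only-794731
    echo $?   # 85 in this run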

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-794731
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)
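
Note: taken together, the two delete subtests read as a cleanup-idempotency check: `delete --all` removes every profile, and the follow-up `delete -p` is still expected to succeed even though the profile is already gone (that reading is inferred from the subtest names and ordering, not stated in the log). Manual cleanup after a download-only experiment uses the same commands:

    out/minikube-linux-amd64 delete --all
    out/minikube-linux-amd64 delete -p download-only-794731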

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (9.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-403279 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-403279 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.972898232s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (9.97s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1202 15:15:35.984921  268099 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1202 15:15:35.984962  268099 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-403279
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-403279: exit status 85 (76.306838ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-794731 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-794731 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ delete  │ -p download-only-794731                                                                                                                                                   │ download-only-794731 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ start   │ -o=json --download-only -p download-only-403279 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-403279 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:15:26
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:15:26.065849  268494 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:15:26.066090  268494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:15:26.066098  268494 out.go:374] Setting ErrFile to fd 2...
	I1202 15:15:26.066102  268494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:15:26.066321  268494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:15:26.066786  268494 out.go:368] Setting JSON to true
	I1202 15:15:26.067703  268494 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7067,"bootTime":1764681459,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:15:26.067760  268494 start.go:143] virtualization: kvm guest
	I1202 15:15:26.069704  268494 out.go:99] [download-only-403279] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:15:26.069857  268494 notify.go:221] Checking for updates...
	I1202 15:15:26.071100  268494 out.go:171] MINIKUBE_LOCATION=22021
	I1202 15:15:26.072464  268494 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:15:26.073661  268494 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:15:26.074767  268494 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 15:15:26.076051  268494 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 15:15:26.078074  268494 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 15:15:26.078324  268494 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:15:26.102087  268494 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:15:26.102212  268494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:15:26.165118  268494 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-02 15:15:26.15411442 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:15:26.165219  268494 docker.go:319] overlay module found
	I1202 15:15:26.166826  268494 out.go:99] Using the docker driver based on user configuration
	I1202 15:15:26.166849  268494 start.go:309] selected driver: docker
	I1202 15:15:26.166854  268494 start.go:927] validating driver "docker" against <nil>
	I1202 15:15:26.166940  268494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:15:26.232292  268494 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-02 15:15:26.222499064 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:15:26.232479  268494 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 15:15:26.233054  268494 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1202 15:15:26.233226  268494 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 15:15:26.234989  268494 out.go:171] Using Docker driver with root privileges
	I1202 15:15:26.236168  268494 cni.go:84] Creating CNI manager for ""
	I1202 15:15:26.236242  268494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 15:15:26.236256  268494 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 15:15:26.236338  268494 start.go:353] cluster config:
	{Name:download-only-403279 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-403279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:15:26.237508  268494 out.go:99] Starting "download-only-403279" primary control-plane node in "download-only-403279" cluster
	I1202 15:15:26.237525  268494 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 15:15:26.238577  268494 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 15:15:26.238623  268494 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 15:15:26.238712  268494 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 15:15:26.256683  268494 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 15:15:26.256879  268494 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 15:15:26.256903  268494 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1202 15:15:26.256913  268494 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1202 15:15:26.256928  268494 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1202 15:15:27.115929  268494 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 15:15:27.115979  268494 cache.go:65] Caching tarball of preloaded images
	I1202 15:15:27.116186  268494 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 15:15:27.117950  268494 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1202 15:15:27.117980  268494 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1202 15:15:27.217892  268494 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1202 15:15:27.217941  268494 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22021-264555/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-403279 host does not exist
	  To start a cluster, run: "minikube start -p download-only-403279"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-403279
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (2.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-509172 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-509172 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.944169068s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.94s)
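
Note: unlike v1.28.0 and v1.34.2 above, this version gets cached-images and binaries subtests below instead of a preload-exists check, which suggests no preload tarball is published for the beta and the images and Kubernetes binaries are fetched individually. To inspect what was downloaded, the per-version binary cache normally lives at the path below (assumed from minikube's usual cache layout, not taken from this log):

    ls ~/.minikube/cache/linux/amd64/v1.35.0-beta.0/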

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-509172
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-509172: exit status 85 (74.144952ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-794731 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-794731 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ delete  │ -p download-only-794731                                                                                                                                                          │ download-only-794731 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ start   │ -o=json --download-only -p download-only-403279 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-403279 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ delete  │ -p download-only-403279                                                                                                                                                          │ download-only-403279 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │ 02 Dec 25 15:15 UTC │
	│ start   │ -o=json --download-only -p download-only-509172 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-509172 │ jenkins │ v1.37.0 │ 02 Dec 25 15:15 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:15:36
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:15:36.508702  268871 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:15:36.508803  268871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:15:36.508808  268871 out.go:374] Setting ErrFile to fd 2...
	I1202 15:15:36.508813  268871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:15:36.509058  268871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:15:36.509555  268871 out.go:368] Setting JSON to true
	I1202 15:15:36.510456  268871 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7077,"bootTime":1764681459,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:15:36.510511  268871 start.go:143] virtualization: kvm guest
	I1202 15:15:36.512198  268871 out.go:99] [download-only-509172] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:15:36.512397  268871 notify.go:221] Checking for updates...
	I1202 15:15:36.513481  268871 out.go:171] MINIKUBE_LOCATION=22021
	I1202 15:15:36.514712  268871 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:15:36.515915  268871 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:15:36.517225  268871 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 15:15:36.518438  268871 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 15:15:36.520667  268871 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 15:15:36.520931  268871 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:15:36.544243  268871 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:15:36.544354  268871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:15:36.608855  268871 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-02 15:15:36.598008048 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:15:36.608955  268871 docker.go:319] overlay module found
	I1202 15:15:36.613542  268871 out.go:99] Using the docker driver based on user configuration
	I1202 15:15:36.613573  268871 start.go:309] selected driver: docker
	I1202 15:15:36.613581  268871 start.go:927] validating driver "docker" against <nil>
	I1202 15:15:36.613679  268871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:15:36.677992  268871 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-02 15:15:36.668753833 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:15:36.678188  268871 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 15:15:36.678698  268871 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1202 15:15:36.678847  268871 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 15:15:36.680389  268871 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-509172 host does not exist
	  To start a cluster, run: "minikube start -p download-only-509172"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-509172
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-358841 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-358841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-358841
--- PASS: TestDownloadOnlyKic (0.44s)

                                                
                                    
TestBinaryMirror (0.88s)

                                                
                                                
=== RUN   TestBinaryMirror
I1202 15:15:41.421233  268099 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-703402 --alsologtostderr --binary-mirror http://127.0.0.1:46397 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-703402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-703402
--- PASS: TestBinaryMirror (0.88s)

                                                
                                    
TestOffline (50.39s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-893562 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-893562 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (47.557435126s)
helpers_test.go:175: Cleaning up "offline-crio-893562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-893562
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-893562: (2.834771119s)
--- PASS: TestOffline (50.39s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-141726
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-141726: exit status 85 (67.447694ms)

                                                
                                                
-- stdout --
	* Profile "addons-141726" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-141726"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-141726
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-141726: exit status 85 (66.570585ms)

                                                
                                                
-- stdout --
	* Profile "addons-141726" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-141726"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (101.69s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-141726 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-141726 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m41.694554063s)
--- PASS: TestAddons/Setup (101.69s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-141726 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-141726 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.45s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-141726 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-141726 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [68f402eb-f188-423f-828c-892475faf6db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [68f402eb-f188-423f-828c-892475faf6db] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00339476s
addons_test.go:694: (dbg) Run:  kubectl --context addons-141726 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-141726 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-141726 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.45s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.69s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-141726
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-141726: (16.374844857s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-141726
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-141726
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-141726
--- PASS: TestAddons/StoppedEnableDisable (16.69s)

                                                
                                    
TestCertOptions (26.1s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-564280 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-564280 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.83294499s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-564280 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-564280 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-564280 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-564280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-564280
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-564280: (2.544680378s)
--- PASS: TestCertOptions (26.10s)

                                                
                                    
TestCertExpiration (211.15s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-208422 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1202 16:11:16.038371  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-208422 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (21.882439993s)
E1202 16:12:07.750313  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-208422 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-208422 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.273632083s)
helpers_test.go:175: Cleaning up "cert-expiration-208422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-208422
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-208422: (2.998036868s)
--- PASS: TestCertExpiration (211.15s)

                                                
                                    
TestForceSystemdFlag (25.12s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-538066 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1202 16:12:24.681098  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-538066 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.208276053s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-538066 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-538066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-538066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-538066: (2.612086969s)
--- PASS: TestForceSystemdFlag (25.12s)

                                                
                                    
TestForceSystemdEnv (22.49s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-577385 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-577385 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.08085838s)
helpers_test.go:175: Cleaning up "force-systemd-env-577385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-577385
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-577385: (2.411814503s)
--- PASS: TestForceSystemdEnv (22.49s)

                                                
                                    
TestErrorSpam/setup (18.58s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-605221 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-605221 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-605221 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-605221 --driver=docker  --container-runtime=crio: (18.575664309s)
--- PASS: TestErrorSpam/setup (18.58s)

                                                
                                    
TestErrorSpam/start (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 start --dry-run
--- PASS: TestErrorSpam/start (0.71s)

                                                
                                    
TestErrorSpam/status (1.01s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 status
--- PASS: TestErrorSpam/status (1.01s)

                                                
                                    
TestErrorSpam/pause (5.86s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 pause: exit status 80 (2.145958952s)

                                                
                                                
-- stdout --
	* Pausing node nospam-605221 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:21:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 pause: exit status 80 (1.68007164s)

                                                
                                                
-- stdout --
	* Pausing node nospam-605221 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:21:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 pause: exit status 80 (2.034481643s)

                                                
                                                
-- stdout --
	* Pausing node nospam-605221 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:21:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.86s)

                                                
                                    
TestErrorSpam/unpause (5.56s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 unpause: exit status 80 (1.892691334s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-605221 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:21:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 unpause: exit status 80 (1.845009879s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-605221 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:21:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 unpause: exit status 80 (1.826120994s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-605221 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T15:21:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.56s)

                                                
                                    
TestErrorSpam/stop (8.15s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 stop: (7.939646065s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-605221 --log_dir /tmp/nospam-605221 stop
--- PASS: TestErrorSpam/stop (8.15s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/test/nested/copy/268099/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (40.47s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-298630 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-298630 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (40.472132234s)
--- PASS: TestFunctional/serial/StartWithProxy (40.47s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.3s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1202 15:22:02.508326  268099 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-298630 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-298630 --alsologtostderr -v=8: (6.303651168s)
functional_test.go:678: soft start took 6.304410145s for "functional-298630" cluster.
I1202 15:22:08.812352  268099 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.30s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-298630 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-298630 cache add registry.k8s.io/pause:3.1: (1.610596788s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-298630 cache add registry.k8s.io/pause:3.3: (1.655196691s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-298630 cache add registry.k8s.io/pause:latest: (1.642038045s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.91s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-298630 /tmp/TestFunctionalserialCacheCmdcacheadd_local2320361951/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 cache add minikube-local-cache-test:functional-298630
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-298630 cache add minikube-local-cache-test:functional-298630: (2.030118066s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 cache delete minikube-local-cache-test:functional-298630
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-298630
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.40s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (294.518471ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-298630 cache reload: (1.276911394s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 kubectl -- --context functional-298630 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-298630 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (46.84s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-298630 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1202 15:22:24.682204  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:24.688612  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:24.700019  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:24.721536  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:24.763080  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:24.844583  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:25.006202  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:25.327964  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:25.970068  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:27.251720  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:29.814569  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:34.936147  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:45.178230  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:23:05.660020  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-298630 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.840729054s)
functional_test.go:776: restart took 46.84086288s for "functional-298630" cluster.
I1202 15:23:06.047820  268099 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (46.84s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-298630 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-298630 logs: (1.247298039s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 logs --file /tmp/TestFunctionalserialLogsFileCmd429862398/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-298630 logs --file /tmp/TestFunctionalserialLogsFileCmd429862398/001/logs.txt: (1.269542613s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.14s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-298630 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-298630
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-298630: exit status 115 (356.822384ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31047 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-298630 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.14s)
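Note: the exit status 115 / SVC_UNREACHABLE above simply means no running pod backs invalid-svc. A quick way to confirm that from the same context before asking minikube for a URL (a sketch using standard kubectl, not part of the test itself):

    kubectl --context functional-298630 get endpoints invalid-svc
    # an empty ENDPOINTS column means no ready pod is behind the service,
    # so `minikube service invalid-svc` fails with SVC_UNREACHABLE as shown above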

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 config get cpus: exit status 14 (108.198302ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 config get cpus: exit status 14 (79.942372ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
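Note: the two `exit status 14` results above are the expected "key not found" code from `minikube config get` once `cpus` has been unset. The same round trip, condensed (a sketch against the same profile and binary path as this run):

    out/minikube-linux-amd64 -p functional-298630 config set cpus 2
    out/minikube-linux-amd64 -p functional-298630 config get cpus     # prints 2, exit 0
    out/minikube-linux-amd64 -p functional-298630 config unset cpus
    out/minikube-linux-amd64 -p functional-298630 config get cpus     # "specified key could not be found in config", exit 14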

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (6.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-298630 --alsologtostderr -v=1]
E1202 15:23:46.622181  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-298630 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 308042: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.91s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-298630 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-298630 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (189.283892ms)

                                                
                                                
-- stdout --
	* [functional-298630] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:23:37.096834  305392 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:23:37.097146  305392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:23:37.097159  305392 out.go:374] Setting ErrFile to fd 2...
	I1202 15:23:37.097166  305392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:23:37.097471  305392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:23:37.097947  305392 out.go:368] Setting JSON to false
	I1202 15:23:37.098987  305392 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7558,"bootTime":1764681459,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:23:37.099063  305392 start.go:143] virtualization: kvm guest
	I1202 15:23:37.101518  305392 out.go:179] * [functional-298630] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:23:37.102814  305392 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:23:37.102873  305392 notify.go:221] Checking for updates...
	I1202 15:23:37.105307  305392 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:23:37.106755  305392 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:23:37.107962  305392 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 15:23:37.109288  305392 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:23:37.110545  305392 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:23:37.112241  305392 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:23:37.112929  305392 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:23:37.141314  305392 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:23:37.141523  305392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:23:37.215222  305392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-02 15:23:37.203916101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:23:37.215327  305392 docker.go:319] overlay module found
	I1202 15:23:37.216610  305392 out.go:179] * Using the docker driver based on existing profile
	I1202 15:23:37.217675  305392 start.go:309] selected driver: docker
	I1202 15:23:37.217694  305392 start.go:927] validating driver "docker" against &{Name:functional-298630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-298630 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:23:37.217794  305392 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:23:37.219401  305392 out.go:203] 
	W1202 15:23:37.220500  305392 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 15:23:37.221524  305392 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-298630 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
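Note: the non-zero exit (status 23) is the RSRC_INSUFFICIENT_REQ_MEMORY validation rejecting the deliberately tiny 250MB request; the follow-up dry run without --memory passes. A dry run that clears the 1800MB floor reported above would look like this (sketch, same profile and flags):

    out/minikube-linux-amd64 start -p functional-298630 --dry-run --memory 2048 \
      --alsologtostderr --driver=docker --container-runtime=crio
    # 2048 (MB) is above the 1800MB minimum, so validation succeeds and nothing is started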

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-298630 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-298630 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (183.605991ms)

                                                
                                                
-- stdout --
	* [functional-298630] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:23:37.529700  305615 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:23:37.529792  305615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:23:37.529797  305615 out.go:374] Setting ErrFile to fd 2...
	I1202 15:23:37.529801  305615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:23:37.530135  305615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:23:37.530643  305615 out.go:368] Setting JSON to false
	I1202 15:23:37.531728  305615 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7559,"bootTime":1764681459,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:23:37.531797  305615 start.go:143] virtualization: kvm guest
	I1202 15:23:37.533867  305615 out.go:179] * [functional-298630] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1202 15:23:37.534976  305615 notify.go:221] Checking for updates...
	I1202 15:23:37.535009  305615 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:23:37.536093  305615 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:23:37.537172  305615 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:23:37.538312  305615 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 15:23:37.539556  305615 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:23:37.543961  305615 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:23:37.545459  305615 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:23:37.546009  305615 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:23:37.570480  305615 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:23:37.570660  305615 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:23:37.637977  305615 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-02 15:23:37.626838701 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:23:37.638079  305615 docker.go:319] overlay module found
	I1202 15:23:37.639932  305615 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 15:23:37.641110  305615 start.go:309] selected driver: docker
	I1202 15:23:37.641139  305615 start.go:927] validating driver "docker" against &{Name:functional-298630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-298630 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:23:37.641219  305615 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:23:37.643151  305615 out.go:203] 
	W1202 15:23:37.644488  305615 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 15:23:37.645719  305615 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
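Note: the French output above comes from minikube localizing its messages to the process locale; the test presumably drives that through the standard locale environment variables (an assumption, the mechanism is not visible in this log). A sketch of reproducing it by hand:

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-298630 --dry-run \
      --memory 250MB --driver=docker --container-runtime=crio
    # same RSRC_INSUFFICIENT_REQ_MEMORY failure, with the message rendered in French
    # ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ...") as captured above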

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (23.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [da0452d2-df26-4001-8fd0-0cb47ea38a30] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003746012s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-298630 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-298630 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-298630 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-298630 apply -f testdata/storage-provisioner/pod.yaml
I1202 15:23:18.812172  268099 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [61c5b6cb-5e3a-4b13-b0ce-779c3504e617] Pending
helpers_test.go:352: "sp-pod" [61c5b6cb-5e3a-4b13-b0ce-779c3504e617] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [61c5b6cb-5e3a-4b13-b0ce-779c3504e617] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004348195s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-298630 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-298630 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-298630 apply -f testdata/storage-provisioner/pod.yaml
I1202 15:23:29.948667  268099 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5f6d17ff-dd61-4324-8155-c0a2f3d88d99] Pending
helpers_test.go:352: "sp-pod" [5f6d17ff-dd61-4324-8155-c0a2f3d88d99] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5f6d17ff-dd61-4324-8155-c0a2f3d88d99] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003478664s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-298630 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.59s)
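Note: the test claims storage through the default storage class, writes /tmp/mount/foo from one pod, deletes that pod, and verifies the file from a second pod bound to the same claim. A minimal claim in the spirit of testdata/storage-provisioner/pvc.yaml (illustrative only; the size and exact fields of the real testdata file are assumptions, the claim name comes from the log):

    kubectl --context functional-298630 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
    EOF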

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh -n functional-298630 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 cp functional-298630:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3908203378/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh -n functional-298630 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh -n functional-298630 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (17.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-298630 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-mz28z" [61936812-ca6b-4185-be9d-866209a8582f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-mz28z" [61936812-ca6b-4185-be9d-866209a8582f] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.004169902s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-298630 exec mysql-5bb876957f-mz28z -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-298630 exec mysql-5bb876957f-mz28z -- mysql -ppassword -e "show databases;": exit status 1 (104.733381ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 15:23:52.194125  268099 retry.go:31] will retry after 1.377301097s: exit status 1
2025/12/02 15:23:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1812: (dbg) Run:  kubectl --context functional-298630 exec mysql-5bb876957f-mz28z -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-298630 exec mysql-5bb876957f-mz28z -- mysql -ppassword -e "show databases;": exit status 1 (120.647022ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 15:23:53.693341  268099 retry.go:31] will retry after 1.283185428s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-298630 exec mysql-5bb876957f-mz28z -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (17.13s)
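Note: the two failed `show databases;` attempts (access denied, then the missing mysqld socket) are just the MySQL container still initializing; the harness retries with backoff until the query succeeds. An equivalent hand-rolled retry (sketch, using the pod name from this run):

    until kubectl --context functional-298630 exec mysql-5bb876957f-mz28z -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2   # keep retrying until mysqld accepts the connection
    done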

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/268099/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "sudo cat /etc/test/nested/copy/268099/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/268099.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "sudo cat /etc/ssl/certs/268099.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/268099.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "sudo cat /usr/share/ca-certificates/268099.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2680992.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "sudo cat /etc/ssl/certs/2680992.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2680992.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "sudo cat /usr/share/ca-certificates/2680992.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.69s)
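Note: the numeric `.0` entries (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names for the synced certificates. Assuming openssl is available inside the node, the hash behind such a name can be checked directly (sketch; which pem maps to which hash is inferred from the check order above, not stated in the log):

    out/minikube-linux-amd64 -p functional-298630 ssh \
      "openssl x509 -noout -hash -in /etc/ssl/certs/2680992.pem"
    # prints the 8-hex-digit hash used as the /etc/ssl/certs/<hash>.0 filename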

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-298630 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 ssh "sudo systemctl is-active docker": exit status 1 (285.573769ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 ssh "sudo systemctl is-active containerd": exit status 1 (285.774171ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
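Note: the `exit status 1` results above are expected: `systemctl is-active` prints "inactive" and exits non-zero (status 3 here) for a unit that is not running, and `minikube ssh` surfaces that as a failure. A direct check of the three runtimes on this crio profile (sketch; the crio result is an inference from the profile's configured runtime):

    out/minikube-linux-amd64 -p functional-298630 ssh "sudo systemctl is-active crio"        # active, exit 0 (configured runtime)
    out/minikube-linux-amd64 -p functional-298630 ssh "sudo systemctl is-active docker"      # inactive, non-zero exit
    out/minikube-linux-amd64 -p functional-298630 ssh "sudo systemctl is-active containerd"  # inactive, non-zero exit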

                                                
                                    
x
+
TestFunctional/parallel/License (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-298630 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-298630 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-298630 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-298630 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 301053: os: process already finished
helpers_test.go:525: unable to kill pid 300740: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-298630 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-298630 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [f76dfdf1-be6c-4e40-b58d-44e44923b391] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [f76dfdf1-be6c-4e40-b58d-44e44923b391] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003836849s
I1202 15:23:23.013206  268099 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.21s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-298630 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.78.94 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
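Note: the tunnel flow exercised in this group is: start `minikube tunnel`, apply a LoadBalancer service (testdata/testsvc.yaml here), wait for an external IP, then hit it. Condensed (sketch; the jsonpath and IP are taken from the log above, the curl step is an assumption about how reachability would be verified by hand):

    out/minikube-linux-amd64 -p functional-298630 tunnel &            # keep running in the background
    kubectl --context functional-298630 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'              # 10.101.78.94 in this run
    curl -s http://10.101.78.94/                                      # reachable while the tunnel is up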

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-298630 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-298630 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-298630 image ls --format short --alsologtostderr:
I1202 15:23:53.594007  308244 out.go:360] Setting OutFile to fd 1 ...
I1202 15:23:53.594124  308244 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:23:53.594136  308244 out.go:374] Setting ErrFile to fd 2...
I1202 15:23:53.594143  308244 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:23:53.594460  308244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
I1202 15:23:53.595256  308244 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 15:23:53.595448  308244 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 15:23:53.596157  308244 cli_runner.go:164] Run: docker container inspect functional-298630 --format={{.State.Status}}
I1202 15:23:53.620811  308244 ssh_runner.go:195] Run: systemctl --version
I1202 15:23:53.620884  308244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-298630
I1202 15:23:53.643985  308244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/functional-298630/id_rsa Username:docker}
I1202 15:23:53.754638  308244 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-298630 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-298630 image ls --format table --alsologtostderr:
I1202 15:23:54.366196  308465 out.go:360] Setting OutFile to fd 1 ...
I1202 15:23:54.366290  308465 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:23:54.366295  308465 out.go:374] Setting ErrFile to fd 2...
I1202 15:23:54.366299  308465 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:23:54.366491  308465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
I1202 15:23:54.367047  308465 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 15:23:54.367137  308465 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 15:23:54.367601  308465 cli_runner.go:164] Run: docker container inspect functional-298630 --format={{.State.Status}}
I1202 15:23:54.385850  308465 ssh_runner.go:195] Run: systemctl --version
I1202 15:23:54.385899  308465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-298630
I1202 15:23:54.403813  308465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/functional-298630/id_rsa Username:docker}
I1202 15:23:54.502502  308465 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-298630 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"a5f569d49a979d9f62c7
42edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0ff
f4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"7610
3547"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"07655ddf2eebe5d250f7a72c25
f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"a3e246e9556e93d71
e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.
34.2"],"size":"53848919"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-298630 image ls --format json --alsologtostderr:
I1202 15:23:54.137601  308400 out.go:360] Setting OutFile to fd 1 ...
I1202 15:23:54.137713  308400 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:23:54.137723  308400 out.go:374] Setting ErrFile to fd 2...
I1202 15:23:54.137731  308400 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:23:54.137966  308400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
I1202 15:23:54.138677  308400 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 15:23:54.138826  308400 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 15:23:54.139383  308400 cli_runner.go:164] Run: docker container inspect functional-298630 --format={{.State.Status}}
I1202 15:23:54.159719  308400 ssh_runner.go:195] Run: systemctl --version
I1202 15:23:54.159779  308400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-298630
I1202 15:23:54.178139  308400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/functional-298630/id_rsa Username:docker}
I1202 15:23:54.277175  308400 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
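
Note: the single-line JSON printed by "image ls --format json" above is hard to scan by eye. The commands below are a hypothetical post-processing aid, not part of the test suite; they assume jq is available on the host and operate on exactly the payload shown above.
# List every tagged image reported by the crio runtime, one tag per line.
out/minikube-linux-amd64 -p functional-298630 image ls --format json | jq -r '.[] | .repoTags[]?'
# Pair each (truncated) image ID with its reported size in bytes.
out/minikube-linux-amd64 -p functional-298630 image ls --format json | jq -r '.[] | "\(.id[0:12])\t\(.size)"'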

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-298630 image ls --format yaml --alsologtostderr:
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-298630 image ls --format yaml --alsologtostderr:
I1202 15:23:53.864111  308314 out.go:360] Setting OutFile to fd 1 ...
I1202 15:23:53.865132  308314 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:23:53.865149  308314 out.go:374] Setting ErrFile to fd 2...
I1202 15:23:53.865158  308314 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:23:53.865911  308314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
I1202 15:23:53.867117  308314 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 15:23:53.867261  308314 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 15:23:53.867931  308314 cli_runner.go:164] Run: docker container inspect functional-298630 --format={{.State.Status}}
I1202 15:23:53.890655  308314 ssh_runner.go:195] Run: systemctl --version
I1202 15:23:53.890716  308314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-298630
I1202 15:23:53.913035  308314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/functional-298630/id_rsa Username:docker}
I1202 15:23:54.022037  308314 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 ssh pgrep buildkitd: exit status 1 (311.595244ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image build -t localhost/my-image:functional-298630 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-298630 image build -t localhost/my-image:functional-298630 testdata/build --alsologtostderr: (3.443272218s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-298630 image build -t localhost/my-image:functional-298630 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cb9ee8d47e0
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-298630
--> 517d50b6c29
Successfully tagged localhost/my-image:functional-298630
517d50b6c29decc402d1dff7733eb2eb2a4aee7428bf289bf916243a1b132e0a
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-298630 image build -t localhost/my-image:functional-298630 testdata/build --alsologtostderr:
I1202 15:23:54.910183  308650 out.go:360] Setting OutFile to fd 1 ...
I1202 15:23:54.910545  308650 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:23:54.910557  308650 out.go:374] Setting ErrFile to fd 2...
I1202 15:23:54.910561  308650 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:23:54.910819  308650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
I1202 15:23:54.911460  308650 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 15:23:54.912337  308650 config.go:182] Loaded profile config "functional-298630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 15:23:54.912796  308650 cli_runner.go:164] Run: docker container inspect functional-298630 --format={{.State.Status}}
I1202 15:23:54.934451  308650 ssh_runner.go:195] Run: systemctl --version
I1202 15:23:54.934512  308650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-298630
I1202 15:23:54.952328  308650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/functional-298630/id_rsa Username:docker}
I1202 15:23:55.052976  308650 build_images.go:162] Building image from path: /tmp/build.4129808464.tar
I1202 15:23:55.053051  308650 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 15:23:55.061247  308650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4129808464.tar
I1202 15:23:55.065467  308650 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4129808464.tar: stat -c "%s %y" /var/lib/minikube/build/build.4129808464.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4129808464.tar': No such file or directory
I1202 15:23:55.065497  308650 ssh_runner.go:362] scp /tmp/build.4129808464.tar --> /var/lib/minikube/build/build.4129808464.tar (3072 bytes)
I1202 15:23:55.083583  308650 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4129808464
I1202 15:23:55.092151  308650 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4129808464 -xf /var/lib/minikube/build/build.4129808464.tar
I1202 15:23:55.101352  308650 crio.go:315] Building image: /var/lib/minikube/build/build.4129808464
I1202 15:23:55.101432  308650 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-298630 /var/lib/minikube/build/build.4129808464 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1202 15:23:58.270083  308650 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-298630 /var/lib/minikube/build/build.4129808464 --cgroup-manager=cgroupfs: (3.168625846s)
I1202 15:23:58.270156  308650 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4129808464
I1202 15:23:58.278405  308650 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4129808464.tar
I1202 15:23:58.286414  308650 build_images.go:218] Built localhost/my-image:functional-298630 from /tmp/build.4129808464.tar
I1202 15:23:58.286473  308650 build_images.go:134] succeeded building to: functional-298630
I1202 15:23:58.286477  308650 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image ls
E1202 15:25:08.544285  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:27:24.680168  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:27:52.385980  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:24.681100  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)
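
Note: the "STEP 1/3" through "STEP 3/3" lines above imply a three-instruction Dockerfile in testdata/build. The sketch below reconstructs an equivalent build context from that output alone; the scratch directory, the content.txt payload, and reusing the same tag are assumptions for illustration, not copies of the repository's test data.
# Recreate a build context equivalent to what the test feeds to "image build".
mkdir -p /tmp/build-sketch
printf 'hello from the build test\n' > /tmp/build-sketch/content.txt
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-sketch/Dockerfile
# Build inside the cluster node; with the crio runtime the log shows this is delegated to podman.
out/minikube-linux-amd64 -p functional-298630 image build -t localhost/my-image:functional-298630 /tmp/build-sketch --alsologtostderr
out/minikube-linux-amd64 -p functional-298630 image ls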

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.726038963s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-298630
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image rm kicbase/echo-server:functional-298630 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "352.618804ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.728447ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "356.242993ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.643714ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
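
Note: a hypothetical way to consume the output of "profile list -o json" measured above. The field names used here (a top-level "valid" array of profile objects, each carrying a "Name") are an assumption about minikube's JSON schema; check the actual payload before relying on them.
# Print the names of profiles minikube reports as valid.
out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'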

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-298630 /tmp/TestFunctionalparallelMountCmdany-port2835191475/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764689015334006256" to /tmp/TestFunctionalparallelMountCmdany-port2835191475/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764689015334006256" to /tmp/TestFunctionalparallelMountCmdany-port2835191475/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764689015334006256" to /tmp/TestFunctionalparallelMountCmdany-port2835191475/001/test-1764689015334006256
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (296.591319ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:23:35.630932  268099 retry.go:31] will retry after 579.901198ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 15:23 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 15:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 15:23 test-1764689015334006256
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh cat /mount-9p/test-1764689015334006256
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-298630 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e5af24b2-be0e-48a7-90fb-934ee81496b0] Pending
helpers_test.go:352: "busybox-mount" [e5af24b2-be0e-48a7-90fb-934ee81496b0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [e5af24b2-be0e-48a7-90fb-934ee81496b0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e5af24b2-be0e-48a7-90fb-934ee81496b0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003851851s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-298630 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-298630 /tmp/TestFunctionalparallelMountCmdany-port2835191475/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.15s)
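
Note: a condensed, by-hand replay of the 9p mount flow exercised above, useful when reproducing a mount failure outside the harness. The profile name and guest path come from the log; the host source directory and the simplified background/kill handling are assumptions.
mkdir -p /tmp/mount-src
# Start the host-to-guest 9p mount in the background.
out/minikube-linux-amd64 mount -p functional-298630 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
# Verify the guest sees the mount, then inspect the exported files.
out/minikube-linux-amd64 -p functional-298630 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-298630 ssh -- ls -la /mount-9p
# Tear down: unmount inside the guest and stop the host-side helper.
out/minikube-linux-amd64 -p functional-298630 ssh "sudo umount -f /mount-9p"
kill "$MOUNT_PID"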

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-298630 /tmp/TestFunctionalparallelMountCmdspecific-port3446947167/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (379.20549ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:23:42.865603  268099 retry.go:31] will retry after 543.315322ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-298630 /tmp/TestFunctionalparallelMountCmdspecific-port3446947167/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 ssh "sudo umount -f /mount-9p": exit status 1 (344.388348ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-298630 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-298630 /tmp/TestFunctionalparallelMountCmdspecific-port3446947167/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-298630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1948369065/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-298630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1948369065/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-298630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1948369065/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-298630 ssh "findmnt -T" /mount1: exit status 1 (419.294436ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:23:45.083579  268099 retry.go:31] will retry after 569.254386ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-298630 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-298630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1948369065/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-298630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1948369065/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-298630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1948369065/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-298630 service list: (1.711756721s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-298630 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-298630 service list -o json: (1.743118369s)
functional_test.go:1504: Took "1.743232395s" to run "out/minikube-linux-amd64 -p functional-298630 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.74s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-298630
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-298630
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-298630
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22021-264555/.minikube/files/etc/test/nested/copy/268099/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (47.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-310311 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-310311 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (47.045639646s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (47.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (7.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1202 15:34:11.286982  268099 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-310311 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-310311 --alsologtostderr -v=8: (7.103821527s)
functional_test.go:678: soft start took 7.10421422s for "functional-310311" cluster.
I1202 15:34:18.391159  268099 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (7.10s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-310311 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (4.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-310311 cache add registry.k8s.io/pause:3.1: (1.49888281s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-310311 cache add registry.k8s.io/pause:3.3: (1.612638109s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-310311 cache add registry.k8s.io/pause:latest: (1.45845479s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (4.57s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-310311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2036523889/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 cache add minikube-local-cache-test:functional-310311
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-310311 cache add minikube-local-cache-test:functional-310311: (2.018230985s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 cache delete minikube-local-cache-test:functional-310311
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-310311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.32s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (2.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.432939ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-310311 cache reload: (1.239626755s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (2.15s)
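
Note: the cache_reload test above encodes a small recovery workflow: remove an image from the node's runtime, then have minikube re-push everything held in its local cache. The commands below restate that flow for manual use; they mirror the log verbatim and assume the functional-310311 profile is still running.
# Remove the image from the node, confirm it is gone, then restore it from minikube's cache.
out/minikube-linux-amd64 -p functional-310311 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-310311 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image absent, as expected"
out/minikube-linux-amd64 -p functional-310311 cache reload
out/minikube-linux-amd64 -p functional-310311 ssh sudo crictl inspecti registry.k8s.io/pause:latest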

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 kubectl -- --context functional-310311 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-310311 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (68.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-310311 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-310311 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m8.34837297s)
functional_test.go:776: restart took 1m8.348528328s for "functional-310311" cluster.
I1202 15:35:36.684549  268099 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (68.35s)
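
Note: one way to confirm that the --extra-config flag passed above actually reached the control plane. The "component=kube-apiserver" label is the conventional label on kubeadm-managed API server pods; treat this as a hypothetical spot check rather than part of the test.
kubectl --context functional-310311 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins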

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-310311 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-310311 logs: (1.255384849s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1882706666/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-310311 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1882706666/001/logs.txt: (1.265191856s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-310311 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-310311
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-310311: exit status 115 (355.518078ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30463 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-310311 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.27s)
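
The InvalidService case expects the service command to fail with SVC_UNREACHABLE when no running pod backs the service; the run above reports exit status 115 for that condition. A minimal sketch of checking for that outcome, assuming the same binary and profile; the exit code is taken from the log above rather than from minikube documentation:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-310311")
        err := cmd.Run()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("service resolved (unexpected for an invalid service)")
        case errors.As(err, &exitErr):
            // Exit status 115 is what the run above reports for SVC_UNREACHABLE.
            fmt.Printf("service command failed with exit status %d\n", exitErr.ExitCode())
        default:
            fmt.Printf("could not run command: %v\n", err)
        }
    }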

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 config get cpus: exit status 14 (89.802768ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 config get cpus: exit status 14 (77.84662ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)
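
The ConfigCmd sequence relies on config get returning exit status 14 once a key has been unset, as the two stderr blocks above show. A minimal sketch of the same set/get/unset round trip, assuming the binary and profile from this run; the expected exit code is the one observed above:

    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    // runConfig invokes `minikube -p functional-310311 config <args>` and returns its exit code.
    func runConfig(args ...string) int {
        cmd := exec.Command("out/minikube-linux-amd64",
            append([]string{"-p", "functional-310311", "config"}, args...)...)
        if err := cmd.Run(); err != nil {
            var exitErr *exec.ExitError
            if errors.As(err, &exitErr) {
                return exitErr.ExitCode()
            }
            log.Fatal(err)
        }
        return 0
    }

    func main() {
        runConfig("set", "cpus", "2")
        fmt.Println("get after set:", runConfig("get", "cpus"))   // expected 0
        runConfig("unset", "cpus")
        fmt.Println("get after unset:", runConfig("get", "cpus")) // 14 in the log above
    }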

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (9.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-310311 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-310311 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 328200: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (9.86s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-310311 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-310311 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (176.787899ms)

                                                
                                                
-- stdout --
	* [functional-310311] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:35:45.488406  324931 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:35:45.488658  324931 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:35:45.488666  324931 out.go:374] Setting ErrFile to fd 2...
	I1202 15:35:45.488670  324931 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:35:45.488868  324931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:35:45.489273  324931 out.go:368] Setting JSON to false
	I1202 15:35:45.490208  324931 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8286,"bootTime":1764681459,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:35:45.490264  324931 start.go:143] virtualization: kvm guest
	I1202 15:35:45.492080  324931 out.go:179] * [functional-310311] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:35:45.493285  324931 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:35:45.493268  324931 notify.go:221] Checking for updates...
	I1202 15:35:45.494545  324931 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:35:45.495924  324931 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:35:45.497170  324931 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 15:35:45.498469  324931 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:35:45.499639  324931 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:35:45.501253  324931 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 15:35:45.501888  324931 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:35:45.525169  324931 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:35:45.525256  324931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:35:45.595476  324931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-02 15:35:45.58250934 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:35:45.595585  324931 docker.go:319] overlay module found
	I1202 15:35:45.597401  324931 out.go:179] * Using the docker driver based on existing profile
	I1202 15:35:45.599616  324931 start.go:309] selected driver: docker
	I1202 15:35:45.599642  324931 start.go:927] validating driver "docker" against &{Name:functional-310311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-310311 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:35:45.599718  324931 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:35:45.601483  324931 out.go:203] 
	W1202 15:35:45.603319  324931 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 15:35:45.604568  324931 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-310311 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)
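
The dry run fails with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below the 1800MB usable minimum quoted in the output above. A minimal sketch of that validation step in isolation, using only the two numbers from the log; the function name is made up for illustration:

    package main

    import "fmt"

    // minUsableMB is the usable minimum reported in the dry-run output above.
    const minUsableMB = 1800

    // validateMemory mimics the request-vs-minimum comparison that produced
    // the RSRC_INSUFFICIENT_REQ_MEMORY exit in the dry run.
    func validateMemory(requestedMB int) error {
        if requestedMB < minUsableMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMB, minUsableMB)
        }
        return nil
    }

    func main() {
        for _, req := range []int{250, 4096} {
            if err := validateMemory(req); err != nil {
                fmt.Println("reject:", err)
                continue
            }
            fmt.Printf("accept: %dMB\n", req)
        }
    }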

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-310311 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-310311 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (188.037787ms)

                                                
                                                
-- stdout --
	* [functional-310311] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:35:45.921967  325246 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:35:45.922074  325246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:35:45.922082  325246 out.go:374] Setting ErrFile to fd 2...
	I1202 15:35:45.922086  325246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:35:45.922399  325246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:35:45.922828  325246 out.go:368] Setting JSON to false
	I1202 15:35:45.923755  325246 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8287,"bootTime":1764681459,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:35:45.923828  325246 start.go:143] virtualization: kvm guest
	I1202 15:35:45.925648  325246 out.go:179] * [functional-310311] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1202 15:35:45.926834  325246 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:35:45.926857  325246 notify.go:221] Checking for updates...
	I1202 15:35:45.929135  325246 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:35:45.930180  325246 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 15:35:45.931240  325246 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 15:35:45.932331  325246 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:35:45.933357  325246 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:35:45.935205  325246 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 15:35:45.935972  325246 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:35:45.962175  325246 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:35:45.962296  325246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:35:46.033246  325246 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-02 15:35:46.022321101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:35:46.033347  325246 docker.go:319] overlay module found
	I1202 15:35:46.035171  325246 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 15:35:46.036392  325246 start.go:309] selected driver: docker
	I1202 15:35:46.036414  325246 start.go:927] validating driver "docker" against &{Name:functional-310311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-310311 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:35:46.036544  325246 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:35:46.038193  325246 out.go:203] 
	W1202 15:35:46.039333  325246 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 15:35:46.040569  325246 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.16s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)
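
The AddonsCmd step lists addons in both plain and JSON form. A minimal sketch of consuming the JSON form, decoding into a generic map keyed by addon name because the exact schema of addons list -o json is an assumption here, not something shown in the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-310311",
            "addons", "list", "-o", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        // Decode into a generic map keyed by addon name; the value schema is not
        // assumed beyond being a JSON object.
        var addons map[string]map[string]any
        if err := json.Unmarshal(out, &addons); err != nil {
            log.Fatal(err)
        }
        for name, fields := range addons {
            fmt.Println(name, fields)
        }
    }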

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (24.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e90555d6-46a4-4d9b-a668-9a9a9fa5926c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003463217s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-310311 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-310311 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-310311 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-310311 apply -f testdata/storage-provisioner/pod.yaml
I1202 15:36:10.748922  268099 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [907681e3-02df-43a3-bd29-0d9dcb92c2f4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [907681e3-02df-43a3-bd29-0d9dcb92c2f4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003715912s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-310311 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-310311 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-310311 apply -f testdata/storage-provisioner/pod.yaml
I1202 15:36:21.788176  268099 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e949d6fa-1899-40ca-b381-91398acfb457] Pending
helpers_test.go:352: "sp-pod" [e949d6fa-1899-40ca-b381-91398acfb457] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e949d6fa-1899-40ca-b381-91398acfb457] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003904249s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-310311 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (24.52s)
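
The PersistentVolumeClaim test writes a file through one pod, deletes the pod, recreates it against the same claim, and checks the file is still there. A minimal sketch of that persistence round trip, assuming the same testdata manifests and context as the run above, and leaving out the readiness waits the real test performs between steps:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // kubectl runs a kubectl command against the functional-310311 context.
    func kubectl(args ...string) string {
        out, err := exec.Command("kubectl",
            append([]string{"--context", "functional-310311"}, args...)...).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl %v: %v\n%s", args, err, out)
        }
        return string(out)
    }

    func main() {
        kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
        kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        // NOTE: the real test waits for sp-pod to become Ready before each exec.
        kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
        kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
        kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        // The file written by the first pod should survive on the claim.
        fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
    }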

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.56s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh -n functional-310311 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 cp functional-310311:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3574966686/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh -n functional-310311 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh -n functional-310311 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.01s)
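
The CpCmd checks copy a local file into the node and read it back over ssh. A minimal sketch that copies testdata/cp-test.txt and verifies the round trip byte for byte, assuming the same binary, profile, and paths as above:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        local, err := os.ReadFile("testdata/cp-test.txt")
        if err != nil {
            log.Fatal(err)
        }
        // Copy the file into the node, then read it back over ssh.
        if out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-310311",
            "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
            log.Fatalf("cp failed: %v\n%s", err, out)
        }
        remote, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-310311",
            "ssh", "-n", "functional-310311", "sudo cat /home/docker/cp-test.txt").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("contents match:", bytes.Equal(local, remote))
    }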

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (19.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-310311 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-jcmq2" [98c4c5cd-eed6-4d9b-95b6-db9c612c2918] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/12/02 15:36:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-844cf969f6-jcmq2" [98c4c5cd-eed6-4d9b-95b6-db9c612c2918] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 17.003252934s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-310311 exec mysql-844cf969f6-jcmq2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-310311 exec mysql-844cf969f6-jcmq2 -- mysql -ppassword -e "show databases;": exit status 1 (124.808509ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 15:36:13.392759  268099 retry.go:31] will retry after 624.248774ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-310311 exec mysql-844cf969f6-jcmq2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-310311 exec mysql-844cf969f6-jcmq2 -- mysql -ppassword -e "show databases;": exit status 1 (120.273272ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 15:36:14.138229  268099 retry.go:31] will retry after 1.781885151s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-310311 exec mysql-844cf969f6-jcmq2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (19.96s)
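
The MySQL step tolerates the "Can't connect to local MySQL server through socket" errors by retrying with growing backoff until the server inside the pod accepts connections, which is what the retry.go lines above record. A minimal sketch of that retry loop, assuming the pod name from this run; the real test derives the name from the deployment instead of hard-coding it:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        const pod = "mysql-844cf969f6-jcmq2" // pod name from the run above; normally discovered dynamically
        query := []string{"--context", "functional-310311", "exec", pod, "--",
            "mysql", "-ppassword", "-e", "show databases;"}

        backoff := 500 * time.Millisecond
        deadline := time.Now().Add(2 * time.Minute)
        for {
            out, err := exec.Command("kubectl", query...).CombinedOutput()
            if err == nil {
                fmt.Print(string(out))
                return
            }
            if time.Now().After(deadline) {
                log.Fatalf("mysql never became reachable: %v\n%s", err, out)
            }
            // mysqld is still starting inside the container; wait and retry.
            time.Sleep(backoff)
            backoff *= 2
        }
    }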

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/268099/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "sudo cat /etc/test/nested/copy/268099/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.36s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/268099.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "sudo cat /etc/ssl/certs/268099.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/268099.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "sudo cat /usr/share/ca-certificates/268099.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2680992.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "sudo cat /etc/ssl/certs/2680992.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2680992.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "sudo cat /usr/share/ca-certificates/2680992.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.67s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-310311 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 ssh "sudo systemctl is-active docker": exit status 1 (274.23109ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 ssh "sudo systemctl is-active containerd": exit status 1 (279.751576ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.55s)
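
The NonActiveRuntimeDisabled check treats the non-zero systemctl is-active exits above as expected, as long as the unit reports "inactive", since crio is the only runtime that should be running in this job. A minimal sketch of the same probe over minikube ssh, assuming the binary and profile from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // isActive reports the `systemctl is-active` output for a unit inside the node.
    // A non-zero exit simply means the unit is not active, so the error is ignored
    // and the printed state ("active", "inactive", "failed", ...) is what matters.
    func isActive(unit string) string {
        out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-310311",
            "ssh", "sudo systemctl is-active "+unit).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        for _, unit := range []string{"docker", "containerd", "crio"} {
            fmt.Printf("%s: %s\n", unit, isActive(unit))
        }
    }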

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.94s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "427.441155ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.387829ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-310311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3461277645/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764689744706497290" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3461277645/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764689744706497290" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3461277645/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764689744706497290" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3461277645/001/test-1764689744706497290
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (339.431932ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:35:45.046277  268099 retry.go:31] will retry after 498.577533ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 15:35 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 15:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 15:35 test-1764689744706497290
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh cat /mount-9p/test-1764689744706497290
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-310311 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d1102ff4-64ff-41d9-8861-21d7bdc8ee6c] Pending
helpers_test.go:352: "busybox-mount" [d1102ff4-64ff-41d9-8861-21d7bdc8ee6c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [d1102ff4-64ff-41d9-8861-21d7bdc8ee6c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d1102ff4-64ff-41d9-8861-21d7bdc8ee6c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002753949s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-310311 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-310311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3461277645/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.91s)
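
The mount test starts minikube mount as a background daemon and then polls findmnt over ssh until the 9p mount appears; the first probe above fails and is retried after roughly half a second. A minimal sketch of the polling half, assuming the mount command is already running and using the same /mount-9p target:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        // Poll until the 9p filesystem is visible inside the node, as the test does
        // after launching `minikube mount ... :/mount-9p` in the background.
        for attempt := 1; attempt <= 10; attempt++ {
            out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-310311",
                "ssh", "findmnt -T /mount-9p | grep 9p").Output()
            if err == nil {
                fmt.Printf("mount ready after %d attempt(s):\n%s", attempt, out)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("/mount-9p never appeared")
    }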

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "352.321045ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "64.1145ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-310311 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-310311 image ls --format short --alsologtostderr:
I1202 15:36:28.299811  331939 out.go:360] Setting OutFile to fd 1 ...
I1202 15:36:28.300061  331939 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:36:28.300072  331939 out.go:374] Setting ErrFile to fd 2...
I1202 15:36:28.300075  331939 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:36:28.300263  331939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
I1202 15:36:28.300803  331939 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 15:36:28.300893  331939 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 15:36:28.301337  331939 cli_runner.go:164] Run: docker container inspect functional-310311 --format={{.State.Status}}
I1202 15:36:28.319368  331939 ssh_runner.go:195] Run: systemctl --version
I1202 15:36:28.319411  331939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-310311
I1202 15:36:28.336745  331939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/functional-310311/id_rsa Username:docker}
I1202 15:36:28.435084  331939 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)
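
For reference, the listing exercised by the four ImageList* checks can be reproduced by hand against the same profile; a minimal sketch (the profile name functional-310311 and the format values are taken from this run):

  # list images known to the cluster's container runtime, in each supported output format
  out/minikube-linux-amd64 -p functional-310311 image ls --format short
  out/minikube-linux-amd64 -p functional-310311 image ls --format table
  out/minikube-linux-amd64 -p functional-310311 image ls --format json
  out/minikube-linux-amd64 -p functional-310311 image ls --format yaml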

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-310311 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 740kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-310311 image ls --format table --alsologtostderr:
I1202 15:36:30.401280  332304 out.go:360] Setting OutFile to fd 1 ...
I1202 15:36:30.401563  332304 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:36:30.401574  332304 out.go:374] Setting ErrFile to fd 2...
I1202 15:36:30.401578  332304 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:36:30.401789  332304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
I1202 15:36:30.402415  332304 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 15:36:30.402533  332304 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 15:36:30.403030  332304 cli_runner.go:164] Run: docker container inspect functional-310311 --format={{.State.Status}}
I1202 15:36:30.423474  332304 ssh_runner.go:195] Run: systemctl --version
I1202 15:36:30.423528  332304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-310311
I1202 15:36:30.443845  332304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/functional-310311/id_rsa Username:docker}
I1202 15:36:30.550314  332304 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-310311 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31468661"},{"id":"aa5e3ebc
0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79190589"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd
781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71976228"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b"],"repoTags":["registry.k8s.io/kube
-controller-manager:v1.35.0-beta.0"],"size":"76869776"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52744336"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/bu
sybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63582165"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90816810"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id
":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"739536"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-310311 image ls --format json --alsologtostderr:
I1202 15:36:30.168434  332251 out.go:360] Setting OutFile to fd 1 ...
I1202 15:36:30.168755  332251 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:36:30.168765  332251 out.go:374] Setting ErrFile to fd 2...
I1202 15:36:30.168770  332251 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:36:30.168989  332251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
I1202 15:36:30.169598  332251 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 15:36:30.169707  332251 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 15:36:30.170180  332251 cli_runner.go:164] Run: docker container inspect functional-310311 --format={{.State.Status}}
I1202 15:36:30.188774  332251 ssh_runner.go:195] Run: systemctl --version
I1202 15:36:30.188829  332251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-310311
I1202 15:36:30.207413  332251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/functional-310311/id_rsa Username:docker}
I1202 15:36:30.305685  332251 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-310311 image ls --format yaml --alsologtostderr:
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63582165"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31468661"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76869776"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71976228"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52744336"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b
repoTags:
- registry.k8s.io/pause:3.10.1
size: "739536"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79190589"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90816810"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-310311 image ls --format yaml --alsologtostderr:
I1202 15:36:29.933842  332199 out.go:360] Setting OutFile to fd 1 ...
I1202 15:36:29.934140  332199 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:36:29.934149  332199 out.go:374] Setting ErrFile to fd 2...
I1202 15:36:29.934153  332199 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:36:29.934381  332199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
I1202 15:36:29.934926  332199 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 15:36:29.935027  332199 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 15:36:29.935478  332199 cli_runner.go:164] Run: docker container inspect functional-310311 --format={{.State.Status}}
I1202 15:36:29.953712  332199 ssh_runner.go:195] Run: systemctl --version
I1202 15:36:29.953769  332199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-310311
I1202 15:36:29.971076  332199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/functional-310311/id_rsa Username:docker}
I1202 15:36:30.070549  332199 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 ssh pgrep buildkitd: exit status 1 (272.346953ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image build -t localhost/my-image:functional-310311 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-310311 image build -t localhost/my-image:functional-310311 testdata/build --alsologtostderr: (2.886304079s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-310311 image build -t localhost/my-image:functional-310311 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 64f0237adfc
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-310311
--> f45328fe0de
Successfully tagged localhost/my-image:functional-310311
f45328fe0def759dd002c3ef9ed8b27c1390c0ea90ebe767be9866cf13d5598f
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-310311 image build -t localhost/my-image:functional-310311 testdata/build --alsologtostderr:
I1202 15:36:28.800928  332116 out.go:360] Setting OutFile to fd 1 ...
I1202 15:36:28.801175  332116 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:36:28.801184  332116 out.go:374] Setting ErrFile to fd 2...
I1202 15:36:28.801189  332116 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:36:28.801372  332116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
I1202 15:36:28.802130  332116 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 15:36:28.802956  332116 config.go:182] Loaded profile config "functional-310311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 15:36:28.803481  332116 cli_runner.go:164] Run: docker container inspect functional-310311 --format={{.State.Status}}
I1202 15:36:28.821642  332116 ssh_runner.go:195] Run: systemctl --version
I1202 15:36:28.821699  332116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-310311
I1202 15:36:28.838918  332116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/functional-310311/id_rsa Username:docker}
I1202 15:36:28.936197  332116 build_images.go:162] Building image from path: /tmp/build.825678322.tar
I1202 15:36:28.936282  332116 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 15:36:28.944225  332116 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.825678322.tar
I1202 15:36:28.947939  332116 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.825678322.tar: stat -c "%s %y" /var/lib/minikube/build/build.825678322.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.825678322.tar': No such file or directory
I1202 15:36:28.947964  332116 ssh_runner.go:362] scp /tmp/build.825678322.tar --> /var/lib/minikube/build/build.825678322.tar (3072 bytes)
I1202 15:36:28.965848  332116 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.825678322
I1202 15:36:28.973528  332116 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.825678322 -xf /var/lib/minikube/build/build.825678322.tar
I1202 15:36:28.981231  332116 crio.go:315] Building image: /var/lib/minikube/build/build.825678322
I1202 15:36:28.981283  332116 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-310311 /var/lib/minikube/build/build.825678322 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1202 15:36:31.606761  332116 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-310311 /var/lib/minikube/build/build.825678322 --cgroup-manager=cgroupfs: (2.625441976s)
I1202 15:36:31.606828  332116 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.825678322
I1202 15:36:31.615525  332116 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.825678322.tar
I1202 15:36:31.623641  332116 build_images.go:218] Built localhost/my-image:functional-310311 from /tmp/build.825678322.tar
I1202 15:36:31.623689  332116 build_images.go:134] succeeded building to: functional-310311
I1202 15:36:31.623694  332116 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image ls
E1202 15:37:24.681052  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:12.971684  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:12.978075  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:12.989528  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:13.010968  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:13.052470  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:13.134541  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:13.296138  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:13.617959  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:14.259854  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:15.541499  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:18.103530  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:23.225250  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:33.467191  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:47.747323  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:53.948577  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:39:34.910303  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:40:56.832062  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:42:24.680494  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:43:12.972178  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:43:40.674303  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.39s)
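
A sketch of what this build check does by hand (the Dockerfile contents are reconstructed from the STEP 1/3..3/3 lines above; content.txt is whatever file the harness ships in testdata/build):

  # testdata/build is a directory containing, per the logged steps:
  #   FROM gcr.io/k8s-minikube/busybox
  #   RUN true
  #   ADD content.txt /
  out/minikube-linux-amd64 -p functional-310311 image build -t localhost/my-image:functional-310311 testdata/build --alsologtostderr
  # confirm the freshly built tag is visible to the runtime
  out/minikube-linux-amd64 -p functional-310311 image ls | grep my-image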

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-310311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.84s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-310311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1095349658/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (306.271278ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:35:51.924839  268099 retry.go:31] will retry after 606.587682ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-310311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1095349658/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 ssh "sudo umount -f /mount-9p": exit status 1 (294.940584ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-310311 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-310311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1095349658/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.02s)
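
A sketch of the specific-port mount flow verified above (the host path is a placeholder; port 46464 is the value used in this run; mount runs in the foreground, so it is backgrounded here):

  # expose a host directory inside the node over 9p on a fixed port
  out/minikube-linux-amd64 mount -p functional-310311 /tmp/host-dir:/mount-9p --port 46464 &
  # the mount should be visible from inside the node
  out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T /mount-9p | grep 9p"
  # tear down; once the mount helper exits, a forced umount reports "not mounted", as seen above
  out/minikube-linux-amd64 -p functional-310311 ssh "sudo umount -f /mount-9p"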

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image rm kicbase/echo-server:functional-310311 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)
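
A sketch of the remove check: the tag staged during Setup is deleted from the cluster runtime and the listing is re-read to confirm it is gone (image name from this run; the grep is an illustration, not part of the harness):

  out/minikube-linux-amd64 -p functional-310311 image rm kicbase/echo-server:functional-310311 --alsologtostderr
  out/minikube-linux-amd64 -p functional-310311 image ls | grep echo-server && echo "still present" || echo "removed"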

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-310311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3914078014/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-310311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3914078014/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-310311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3914078014/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T" /mount1: exit status 1 (378.250955ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:35:54.019212  268099 retry.go:31] will retry after 522.819677ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-310311 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-310311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3914078014/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-310311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3914078014/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-310311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3914078014/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.86s)
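
The cleanup check leans on mount --kill, which terminates every mount helper belonging to the profile in one call; a sketch (host path is a placeholder):

  # start several mounts from the same host directory
  out/minikube-linux-amd64 mount -p functional-310311 /tmp/shared:/mount1 &
  out/minikube-linux-amd64 mount -p functional-310311 /tmp/shared:/mount2 &
  out/minikube-linux-amd64 mount -p functional-310311 /tmp/shared:/mount3 &
  # spot-check one of them, then kill all mount processes for the profile at once
  out/minikube-linux-amd64 -p functional-310311 ssh "findmnt -T /mount1"
  out/minikube-linux-amd64 mount -p functional-310311 --kill=true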

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.17s)
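
All three update-context variants reduce to the same command, which rewrites the kubeconfig entry for the profile to point at the cluster's current apiserver address; a sketch (the kubectl check afterwards is an illustration, not part of the harness):

  out/minikube-linux-amd64 -p functional-310311 update-context --alsologtostderr -v=2
  # optionally inspect the refreshed server URL for the context
  kubectl config view --minify --context functional-310311 -o jsonpath='{.clusters[0].cluster.server}'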

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-310311 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-310311 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-310311 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-310311 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 330503: os: process already finished
helpers_test.go:525: unable to kill pid 330316: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-310311 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-310311 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6493c8da-b149-4d13-8aee-41379d63fcb9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [6493c8da-b149-4d13-8aee-41379d63fcb9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003692585s
I1202 15:36:25.717143  268099 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.19s)
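
Taken together, the tunnel sub-tests amount to: run minikube tunnel in the background, create a LoadBalancer service, and wait until its ingress IP is reachable from the host. A sketch (testsvc.yaml is the harness manifest referenced above; the curl stands in for the AccessDirect step, using the IP printed later in this run):

  out/minikube-linux-amd64 -p functional-310311 tunnel --alsologtostderr &
  kubectl --context functional-310311 apply -f testdata/testsvc.yaml
  # poll until the LoadBalancer gets an ingress IP via the tunnel
  kubectl --context functional-310311 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl -sf http://10.103.216.177/    # substitute the IP returned above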

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-310311 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.216.177 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-310311 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.76s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-310311 service list: (1.759454417s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.76s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.76s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-310311 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-310311 service list -o json: (1.763013869s)
functional_test.go:1504: Took "1.763155821s" to run "out/minikube-linux-amd64 -p functional-310311 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.76s)
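
A sketch of the two listing forms exercised here (piping through jq is only for readability and is not part of the harness):

  out/minikube-linux-amd64 -p functional-310311 service list
  out/minikube-linux-amd64 -p functional-310311 service list -o json | jq .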

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-310311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-310311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-310311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (121.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1202 15:47:24.680942  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-919679 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m1.155751554s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (121.92s)
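
A sketch of the HA start being timed here, with the flags copied from the invocation above (--ha provisions a multi-control-plane cluster rather than a single control-plane node):

  out/minikube-linux-amd64 -p ha-919679 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
  # status reports every control-plane and worker node of the profile
  out/minikube-linux-amd64 -p ha-919679 status --alsologtostderr -v 5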

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-919679 kubectl -- rollout status deployment/busybox: (3.166538836s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-bqqvf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-m79rf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-nvx2p -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-bqqvf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-m79rf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-nvx2p -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-bqqvf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-m79rf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-nvx2p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.17s)
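
A sketch of the deploy-and-resolve flow this test walks through (manifest path and deployment name are from the run; <pod> is a placeholder for any of the busybox replicas listed by the jsonpath query):

  out/minikube-linux-amd64 -p ha-919679 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-amd64 -p ha-919679 kubectl -- rollout status deployment/busybox
  out/minikube-linux-amd64 -p ha-919679 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
  # every replica must resolve both external and in-cluster names
  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local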

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-bqqvf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-bqqvf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-m79rf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-m79rf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-nvx2p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 kubectl -- exec busybox-7b57f96db7-nvx2p -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.09s)

TestMultiControlPlane/serial/AddWorkerNode (27.19s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 node add --alsologtostderr -v 5
E1202 15:48:12.971355  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-919679 node add --alsologtostderr -v 5: (26.257052818s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.19s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-919679 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

TestMultiControlPlane/serial/CopyFile (17.79s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp testdata/cp-test.txt ha-919679:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile852628386/001/cp-test_ha-919679.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679:/home/docker/cp-test.txt ha-919679-m02:/home/docker/cp-test_ha-919679_ha-919679-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m02 "sudo cat /home/docker/cp-test_ha-919679_ha-919679-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679:/home/docker/cp-test.txt ha-919679-m03:/home/docker/cp-test_ha-919679_ha-919679-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m03 "sudo cat /home/docker/cp-test_ha-919679_ha-919679-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679:/home/docker/cp-test.txt ha-919679-m04:/home/docker/cp-test_ha-919679_ha-919679-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m04 "sudo cat /home/docker/cp-test_ha-919679_ha-919679-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp testdata/cp-test.txt ha-919679-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile852628386/001/cp-test_ha-919679-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679-m02:/home/docker/cp-test.txt ha-919679:/home/docker/cp-test_ha-919679-m02_ha-919679.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679 "sudo cat /home/docker/cp-test_ha-919679-m02_ha-919679.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679-m02:/home/docker/cp-test.txt ha-919679-m03:/home/docker/cp-test_ha-919679-m02_ha-919679-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m03 "sudo cat /home/docker/cp-test_ha-919679-m02_ha-919679-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679-m02:/home/docker/cp-test.txt ha-919679-m04:/home/docker/cp-test_ha-919679-m02_ha-919679-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m04 "sudo cat /home/docker/cp-test_ha-919679-m02_ha-919679-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp testdata/cp-test.txt ha-919679-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile852628386/001/cp-test_ha-919679-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679-m03:/home/docker/cp-test.txt ha-919679:/home/docker/cp-test_ha-919679-m03_ha-919679.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679 "sudo cat /home/docker/cp-test_ha-919679-m03_ha-919679.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679-m03:/home/docker/cp-test.txt ha-919679-m02:/home/docker/cp-test_ha-919679-m03_ha-919679-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m02 "sudo cat /home/docker/cp-test_ha-919679-m03_ha-919679-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679-m03:/home/docker/cp-test.txt ha-919679-m04:/home/docker/cp-test_ha-919679-m03_ha-919679-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m04 "sudo cat /home/docker/cp-test_ha-919679-m03_ha-919679-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp testdata/cp-test.txt ha-919679-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile852628386/001/cp-test_ha-919679-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679-m04:/home/docker/cp-test.txt ha-919679:/home/docker/cp-test_ha-919679-m04_ha-919679.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679 "sudo cat /home/docker/cp-test_ha-919679-m04_ha-919679.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679-m04:/home/docker/cp-test.txt ha-919679-m02:/home/docker/cp-test_ha-919679-m04_ha-919679-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m02 "sudo cat /home/docker/cp-test_ha-919679-m04_ha-919679-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 cp ha-919679-m04:/home/docker/cp-test.txt ha-919679-m03:/home/docker/cp-test_ha-919679-m04_ha-919679-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 ssh -n ha-919679-m03 "sudo cat /home/docker/cp-test_ha-919679-m04_ha-919679-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.79s)

TestMultiControlPlane/serial/StopSecondaryNode (13.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-919679 node stop m02 --alsologtostderr -v 5: (13.131410667s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919679 status --alsologtostderr -v 5: exit status 7 (736.105142ms)

                                                
                                                
-- stdout --
	ha-919679
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919679-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-919679-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919679-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:49:02.748306  356451 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:49:02.748645  356451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:49:02.748660  356451 out.go:374] Setting ErrFile to fd 2...
	I1202 15:49:02.748667  356451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:49:02.748983  356451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:49:02.749199  356451 out.go:368] Setting JSON to false
	I1202 15:49:02.749226  356451 mustload.go:66] Loading cluster: ha-919679
	I1202 15:49:02.749356  356451 notify.go:221] Checking for updates...
	I1202 15:49:02.749753  356451 config.go:182] Loaded profile config "ha-919679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:49:02.749774  356451 status.go:174] checking status of ha-919679 ...
	I1202 15:49:02.750296  356451 cli_runner.go:164] Run: docker container inspect ha-919679 --format={{.State.Status}}
	I1202 15:49:02.770408  356451 status.go:371] ha-919679 host status = "Running" (err=<nil>)
	I1202 15:49:02.770481  356451 host.go:66] Checking if "ha-919679" exists ...
	I1202 15:49:02.770793  356451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-919679
	I1202 15:49:02.790899  356451 host.go:66] Checking if "ha-919679" exists ...
	I1202 15:49:02.791270  356451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:49:02.791324  356451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-919679
	I1202 15:49:02.810415  356451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/ha-919679/id_rsa Username:docker}
	I1202 15:49:02.909464  356451 ssh_runner.go:195] Run: systemctl --version
	I1202 15:49:02.916189  356451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:49:02.929573  356451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:49:02.995191  356451 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-02 15:49:02.983960097 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:49:02.995846  356451 kubeconfig.go:125] found "ha-919679" server: "https://192.168.49.254:8443"
	I1202 15:49:02.995881  356451 api_server.go:166] Checking apiserver status ...
	I1202 15:49:02.995926  356451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 15:49:03.008824  356451 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1252/cgroup
	W1202 15:49:03.017685  356451 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1252/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 15:49:03.017739  356451 ssh_runner.go:195] Run: ls
	I1202 15:49:03.021849  356451 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 15:49:03.027792  356451 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 15:49:03.027823  356451 status.go:463] ha-919679 apiserver status = Running (err=<nil>)
	I1202 15:49:03.027834  356451 status.go:176] ha-919679 status: &{Name:ha-919679 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:49:03.027853  356451 status.go:174] checking status of ha-919679-m02 ...
	I1202 15:49:03.028087  356451 cli_runner.go:164] Run: docker container inspect ha-919679-m02 --format={{.State.Status}}
	I1202 15:49:03.047229  356451 status.go:371] ha-919679-m02 host status = "Stopped" (err=<nil>)
	I1202 15:49:03.047256  356451 status.go:384] host is not running, skipping remaining checks
	I1202 15:49:03.047264  356451 status.go:176] ha-919679-m02 status: &{Name:ha-919679-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:49:03.047290  356451 status.go:174] checking status of ha-919679-m03 ...
	I1202 15:49:03.047632  356451 cli_runner.go:164] Run: docker container inspect ha-919679-m03 --format={{.State.Status}}
	I1202 15:49:03.066324  356451 status.go:371] ha-919679-m03 host status = "Running" (err=<nil>)
	I1202 15:49:03.066381  356451 host.go:66] Checking if "ha-919679-m03" exists ...
	I1202 15:49:03.066791  356451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-919679-m03
	I1202 15:49:03.085952  356451 host.go:66] Checking if "ha-919679-m03" exists ...
	I1202 15:49:03.086205  356451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:49:03.086241  356451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-919679-m03
	I1202 15:49:03.105828  356451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/ha-919679-m03/id_rsa Username:docker}
	I1202 15:49:03.204776  356451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:49:03.217592  356451 kubeconfig.go:125] found "ha-919679" server: "https://192.168.49.254:8443"
	I1202 15:49:03.217624  356451 api_server.go:166] Checking apiserver status ...
	I1202 15:49:03.217664  356451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 15:49:03.230129  356451 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W1202 15:49:03.239136  356451 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 15:49:03.239194  356451 ssh_runner.go:195] Run: ls
	I1202 15:49:03.243494  356451 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 15:49:03.248534  356451 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 15:49:03.248555  356451 status.go:463] ha-919679-m03 apiserver status = Running (err=<nil>)
	I1202 15:49:03.248564  356451 status.go:176] ha-919679-m03 status: &{Name:ha-919679-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:49:03.248617  356451 status.go:174] checking status of ha-919679-m04 ...
	I1202 15:49:03.248839  356451 cli_runner.go:164] Run: docker container inspect ha-919679-m04 --format={{.State.Status}}
	I1202 15:49:03.267312  356451 status.go:371] ha-919679-m04 host status = "Running" (err=<nil>)
	I1202 15:49:03.267347  356451 host.go:66] Checking if "ha-919679-m04" exists ...
	I1202 15:49:03.267638  356451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-919679-m04
	I1202 15:49:03.288451  356451 host.go:66] Checking if "ha-919679-m04" exists ...
	I1202 15:49:03.288742  356451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:49:03.288784  356451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-919679-m04
	I1202 15:49:03.308076  356451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/ha-919679-m04/id_rsa Username:docker}
	I1202 15:49:03.406835  356451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:49:03.419563  356451 status.go:176] ha-919679-m04 status: &{Name:ha-919679-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.87s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

TestMultiControlPlane/serial/RestartSecondaryNode (23.65s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-919679 node start m02 --alsologtostderr -v 5: (22.671865416s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.65s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.4s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-919679 stop --alsologtostderr -v 5: (49.298030947s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 start --wait true --alsologtostderr -v 5
E1202 15:50:43.736798  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:50:43.743249  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:50:43.754607  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:50:43.776037  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:50:43.817512  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:50:43.898945  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:50:44.060556  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:50:44.382218  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:50:45.024206  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:50:46.305559  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:50:48.867154  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:50:53.988892  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:51:04.231239  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-919679 start --wait true --alsologtostderr -v 5: (1m2.964685891s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.40s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.7s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 node delete m03 --alsologtostderr -v 5
E1202 15:51:24.712923  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-919679 node delete m03 --alsologtostderr -v 5: (9.819424861s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.70s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

TestMultiControlPlane/serial/StopCluster (43.69s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 stop --alsologtostderr -v 5
E1202 15:52:05.674579  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-919679 stop --alsologtostderr -v 5: (43.565410744s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919679 status --alsologtostderr -v 5: exit status 7 (120.812359ms)

                                                
                                                
-- stdout --
	ha-919679
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-919679-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-919679-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:52:16.251391  370606 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:52:16.251512  370606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:52:16.251520  370606 out.go:374] Setting ErrFile to fd 2...
	I1202 15:52:16.251524  370606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:52:16.251743  370606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 15:52:16.251902  370606 out.go:368] Setting JSON to false
	I1202 15:52:16.251935  370606 mustload.go:66] Loading cluster: ha-919679
	I1202 15:52:16.252080  370606 notify.go:221] Checking for updates...
	I1202 15:52:16.252293  370606 config.go:182] Loaded profile config "ha-919679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 15:52:16.252307  370606 status.go:174] checking status of ha-919679 ...
	I1202 15:52:16.252749  370606 cli_runner.go:164] Run: docker container inspect ha-919679 --format={{.State.Status}}
	I1202 15:52:16.271942  370606 status.go:371] ha-919679 host status = "Stopped" (err=<nil>)
	I1202 15:52:16.271993  370606 status.go:384] host is not running, skipping remaining checks
	I1202 15:52:16.272003  370606 status.go:176] ha-919679 status: &{Name:ha-919679 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:52:16.272028  370606 status.go:174] checking status of ha-919679-m02 ...
	I1202 15:52:16.272283  370606 cli_runner.go:164] Run: docker container inspect ha-919679-m02 --format={{.State.Status}}
	I1202 15:52:16.291512  370606 status.go:371] ha-919679-m02 host status = "Stopped" (err=<nil>)
	I1202 15:52:16.291537  370606 status.go:384] host is not running, skipping remaining checks
	I1202 15:52:16.291545  370606 status.go:176] ha-919679-m02 status: &{Name:ha-919679-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:52:16.291592  370606 status.go:174] checking status of ha-919679-m04 ...
	I1202 15:52:16.291848  370606 cli_runner.go:164] Run: docker container inspect ha-919679-m04 --format={{.State.Status}}
	I1202 15:52:16.309464  370606 status.go:371] ha-919679-m04 host status = "Stopped" (err=<nil>)
	I1202 15:52:16.309491  370606 status.go:384] host is not running, skipping remaining checks
	I1202 15:52:16.309499  370606 status.go:176] ha-919679-m04 status: &{Name:ha-919679-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.69s)

TestMultiControlPlane/serial/RestartCluster (53.1s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1202 15:52:24.680042  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-919679 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (52.231299244s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.10s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (42.98s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 node add --control-plane --alsologtostderr -v 5
E1202 15:53:12.971584  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:53:27.597561  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-919679 node add --control-plane --alsologtostderr -v 5: (42.022858521s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-919679 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.98s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

TestJSONOutput/start/Command (35.99s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-448937 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1202 15:54:36.036409  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-448937 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (35.989226278s)
--- PASS: TestJSONOutput/start/Command (35.99s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-448937 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-448937 --output=json --user=testUser: (6.106710944s)
--- PASS: TestJSONOutput/stop/Command (6.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-671407 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-671407 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (82.250958ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"eae72053-e1c4-4de2-a79d-62251084216e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-671407] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2b2456c-4896-4e47-ad3b-20d1f6c43fe7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22021"}}
	{"specversion":"1.0","id":"29606f57-28a8-4b9e-9d21-15d0bd0c1b94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ab19f276-574c-4a91-bfe6-8f2cf400b089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig"}}
	{"specversion":"1.0","id":"3e596923-721f-4573-90b1-20a403be6163","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube"}}
	{"specversion":"1.0","id":"c728d232-31a9-4e27-8f52-89789445b8fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"21bd247b-44fa-4757-aaf9-4971a333cec3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9263cb8d-123e-420b-842d-177544e54523","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-671407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-671407
--- PASS: TestErrorJSONOutput (0.25s)

TestKicCustomNetwork/create_custom_network (36.18s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-148062 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-148062 --network=: (33.993779653s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-148062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-148062
E1202 15:55:27.748930  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-148062: (2.1591684s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.18s)

TestKicCustomNetwork/use_default_bridge_network (26.06s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-811100 --network=bridge
E1202 15:55:43.736085  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-811100 --network=bridge: (24.058486706s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-811100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-811100
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-811100: (1.981230692s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.06s)

TestKicExistingNetwork (23.34s)

=== RUN   TestKicExistingNetwork
I1202 15:55:55.777789  268099 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1202 15:55:55.795487  268099 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1202 15:55:55.795563  268099 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1202 15:55:55.795588  268099 cli_runner.go:164] Run: docker network inspect existing-network
W1202 15:55:55.812655  268099 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1202 15:55:55.812709  268099 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1202 15:55:55.812726  268099 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1202 15:55:55.812893  268099 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 15:55:55.830477  268099 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-59c4d474daec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:20:cf:7a:79:c5} reservation:<nil>}
I1202 15:55:55.830841  268099 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019492f0}
I1202 15:55:55.830873  268099 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1202 15:55:55.830914  268099 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1202 15:55:55.882476  268099 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-504755 --network=existing-network
E1202 15:56:11.443607  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-504755 --network=existing-network: (21.156869807s)
helpers_test.go:175: Cleaning up "existing-network-504755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-504755
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-504755: (2.04358902s)
I1202 15:56:19.102624  268099 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.34s)
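
Condensed, the existing-network flow exercised above is: create the bridge network yourself, then point minikube at it by name. A rough sketch using the commands from this run (the subnet and profile name are specific to this job, and the final network cleanup is an added step, not part of the test):

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
  out/minikube-linux-amd64 start -p existing-network-504755 --network=existing-network
  out/minikube-linux-amd64 delete -p existing-network-504755
  # the test then lists networks to see what the delete left behind
  docker network ls --filter=label=existing-network --format {{.Name}}
  docker network rm existing-network   # manual cleanup of the pre-created network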

                                                
                                    
TestKicCustomSubnet (23.64s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-777221 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-777221 --subnet=192.168.60.0/24: (21.462300382s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-777221 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-777221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-777221
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-777221: (2.160740756s)
--- PASS: TestKicCustomSubnet (23.64s)
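
The custom-subnet case is the same idea with minikube creating the network itself; roughly equivalent to the run above (subnet value taken from this run):

  out/minikube-linux-amd64 start -p custom-subnet-777221 --subnet=192.168.60.0/24
  # confirm the created network uses the requested CIDR
  docker network inspect custom-subnet-777221 --format "{{(index .IPAM.Config 0).Subnet}}"
  out/minikube-linux-amd64 delete -p custom-subnet-777221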

                                                
                                    
TestKicStaticIP (24.28s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-649093 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-649093 --static-ip=192.168.200.200: (21.934946194s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-649093 ip
helpers_test.go:175: Cleaning up "static-ip-649093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-649093
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-649093: (2.194014552s)
--- PASS: TestKicStaticIP (24.28s)
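
For the static-IP variant, the run above reduces to three commands (address taken from this run):

  out/minikube-linux-amd64 start -p static-ip-649093 --static-ip=192.168.200.200
  # "minikube ip" should report the pinned address
  out/minikube-linux-amd64 -p static-ip-649093 ip
  out/minikube-linux-amd64 delete -p static-ip-649093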

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (45.62s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-399665 --driver=docker  --container-runtime=crio
E1202 15:57:24.680612  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-399665 --driver=docker  --container-runtime=crio: (20.3900664s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-401921 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-401921 --driver=docker  --container-runtime=crio: (19.160842716s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-399665
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-401921
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-401921" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-401921
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-401921: (2.38067971s)
helpers_test.go:175: Cleaning up "first-399665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-399665
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-399665: (2.397339887s)
--- PASS: TestMinikubeProfile (45.62s)
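
The profile test above is essentially: create two profiles, switch the active one back and forth, and read the result from profile list. In shorthand (profile names from this run):

  out/minikube-linux-amd64 start -p first-399665 --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 start -p second-401921 --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 profile first-399665     # make first-399665 the active profile
  out/minikube-linux-amd64 profile list -ojson      # inspect the known profiles as JSON
  out/minikube-linux-amd64 profile second-401921
  out/minikube-linux-amd64 profile list -ojson
  out/minikube-linux-amd64 delete -p second-401921
  out/minikube-linux-amd64 delete -p first-399665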

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-687110 --memory=3072 --mount-string /tmp/TestMountStartserial2352145672/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-687110 --memory=3072 --mount-string /tmp/TestMountStartserial2352145672/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.887021347s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.89s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-687110 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
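
Stripped of test plumbing, the mount-start pattern used above is a no-Kubernetes node started with a host-directory mount, verified by listing the mount point over ssh (flags copied from this run; the /tmp path is job-specific):

  out/minikube-linux-amd64 start -p mount-start-1-687110 --memory=3072 \
      --mount-string /tmp/TestMountStartserial2352145672/001:/minikube-host \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
  # the mount is verified by listing the target path inside the node
  out/minikube-linux-amd64 -p mount-start-1-687110 ssh -- ls /minikube-host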

                                                
                                    
TestMountStart/serial/StartWithMountSecond (4.97s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-699773 --memory=3072 --mount-string /tmp/TestMountStartserial2352145672/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-699773 --memory=3072 --mount-string /tmp/TestMountStartserial2352145672/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.972266822s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.97s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-699773 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-687110 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-687110 --alsologtostderr -v=5: (1.715271551s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-699773 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-699773
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-699773: (1.2624344s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.63s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-699773
E1202 15:58:12.973605  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-699773: (7.634278846s)
--- PASS: TestMountStart/serial/RestartStopped (8.63s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-699773 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (63.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-732778 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-732778 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m3.178794583s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (63.70s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-732778 -- rollout status deployment/busybox: (2.818592298s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- exec busybox-7b57f96db7-h6kk6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- exec busybox-7b57f96db7-xl79x -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- exec busybox-7b57f96db7-h6kk6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- exec busybox-7b57f96db7-xl79x -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- exec busybox-7b57f96db7-h6kk6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- exec busybox-7b57f96db7-xl79x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.23s)
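
The DNS checks above follow one pattern: deploy the busybox manifest, wait for the rollout, then run nslookup inside each pod. Roughly (manifest path and pod name are from this run; pod names will differ on a fresh deployment):

  out/minikube-linux-amd64 kubectl -p multinode-732778 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
  out/minikube-linux-amd64 kubectl -p multinode-732778 -- rollout status deployment/busybox
  # resolve an external name and the in-cluster service from one of the pods
  out/minikube-linux-amd64 kubectl -p multinode-732778 -- exec busybox-7b57f96db7-h6kk6 -- nslookup kubernetes.io
  out/minikube-linux-amd64 kubectl -p multinode-732778 -- exec busybox-7b57f96db7-h6kk6 -- nslookup kubernetes.default.svc.cluster.local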

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- exec busybox-7b57f96db7-h6kk6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- exec busybox-7b57f96db7-h6kk6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- exec busybox-7b57f96db7-xl79x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-732778 -- exec busybox-7b57f96db7-xl79x -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)

                                                
                                    
TestMultiNode/serial/AddNode (23.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-732778 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-732778 -v=5 --alsologtostderr: (23.195442555s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.88s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-732778 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 cp testdata/cp-test.txt multinode-732778:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 cp multinode-732778:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4196783782/001/cp-test_multinode-732778.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 cp multinode-732778:/home/docker/cp-test.txt multinode-732778-m02:/home/docker/cp-test_multinode-732778_multinode-732778-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m02 "sudo cat /home/docker/cp-test_multinode-732778_multinode-732778-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 cp multinode-732778:/home/docker/cp-test.txt multinode-732778-m03:/home/docker/cp-test_multinode-732778_multinode-732778-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m03 "sudo cat /home/docker/cp-test_multinode-732778_multinode-732778-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 cp testdata/cp-test.txt multinode-732778-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 cp multinode-732778-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4196783782/001/cp-test_multinode-732778-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 cp multinode-732778-m02:/home/docker/cp-test.txt multinode-732778:/home/docker/cp-test_multinode-732778-m02_multinode-732778.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778 "sudo cat /home/docker/cp-test_multinode-732778-m02_multinode-732778.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 cp multinode-732778-m02:/home/docker/cp-test.txt multinode-732778-m03:/home/docker/cp-test_multinode-732778-m02_multinode-732778-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m03 "sudo cat /home/docker/cp-test_multinode-732778-m02_multinode-732778-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 cp testdata/cp-test.txt multinode-732778-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 cp multinode-732778-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4196783782/001/cp-test_multinode-732778-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 cp multinode-732778-m03:/home/docker/cp-test.txt multinode-732778:/home/docker/cp-test_multinode-732778-m03_multinode-732778.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778 "sudo cat /home/docker/cp-test_multinode-732778-m03_multinode-732778.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 cp multinode-732778-m03:/home/docker/cp-test.txt multinode-732778-m02:/home/docker/cp-test_multinode-732778-m03_multinode-732778-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m02 "sudo cat /home/docker/cp-test_multinode-732778-m03_multinode-732778-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.23s)
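
The copy matrix above exercises every direction of minikube cp; the three shapes it uses are host-to-node, node-to-host, and node-to-node, each checked with sudo cat over ssh. A trimmed-down version (node names from this run; the host-side destination path below is illustrative):

  # host -> node
  out/minikube-linux-amd64 -p multinode-732778 cp testdata/cp-test.txt multinode-732778:/home/docker/cp-test.txt
  # node -> host
  out/minikube-linux-amd64 -p multinode-732778 cp multinode-732778:/home/docker/cp-test.txt /tmp/cp-test_multinode-732778.txt
  # node -> node
  out/minikube-linux-amd64 -p multinode-732778 cp multinode-732778:/home/docker/cp-test.txt multinode-732778-m02:/home/docker/cp-test.txt
  # verify on the receiving node
  out/minikube-linux-amd64 -p multinode-732778 ssh -n multinode-732778-m02 "sudo cat /home/docker/cp-test.txt"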

                                                
                                    
TestMultiNode/serial/StopNode (2.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-732778 node stop m03: (1.280255354s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-732778 status: exit status 7 (522.566049ms)

                                                
                                                
-- stdout --
	multinode-732778
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-732778-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-732778-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-732778 status --alsologtostderr: exit status 7 (529.810019ms)

                                                
                                                
-- stdout --
	multinode-732778
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-732778-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-732778-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 16:00:05.569471  429942 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:00:05.569592  429942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:00:05.569602  429942 out.go:374] Setting ErrFile to fd 2...
	I1202 16:00:05.569609  429942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:00:05.569878  429942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:00:05.570101  429942 out.go:368] Setting JSON to false
	I1202 16:00:05.570135  429942 mustload.go:66] Loading cluster: multinode-732778
	I1202 16:00:05.570787  429942 notify.go:221] Checking for updates...
	I1202 16:00:05.571453  429942 config.go:182] Loaded profile config "multinode-732778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:00:05.571488  429942 status.go:174] checking status of multinode-732778 ...
	I1202 16:00:05.572730  429942 cli_runner.go:164] Run: docker container inspect multinode-732778 --format={{.State.Status}}
	I1202 16:00:05.591734  429942 status.go:371] multinode-732778 host status = "Running" (err=<nil>)
	I1202 16:00:05.591763  429942 host.go:66] Checking if "multinode-732778" exists ...
	I1202 16:00:05.592040  429942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-732778
	I1202 16:00:05.612378  429942 host.go:66] Checking if "multinode-732778" exists ...
	I1202 16:00:05.612717  429942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:00:05.612762  429942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-732778
	I1202 16:00:05.632539  429942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33029 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/multinode-732778/id_rsa Username:docker}
	I1202 16:00:05.731684  429942 ssh_runner.go:195] Run: systemctl --version
	I1202 16:00:05.739859  429942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:00:05.752569  429942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:00:05.814182  429942 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-02 16:00:05.802602064 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:00:05.814914  429942 kubeconfig.go:125] found "multinode-732778" server: "https://192.168.67.2:8443"
	I1202 16:00:05.814954  429942 api_server.go:166] Checking apiserver status ...
	I1202 16:00:05.815001  429942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 16:00:05.827282  429942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup
	W1202 16:00:05.836134  429942 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 16:00:05.836184  429942 ssh_runner.go:195] Run: ls
	I1202 16:00:05.840475  429942 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1202 16:00:05.845342  429942 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1202 16:00:05.845375  429942 status.go:463] multinode-732778 apiserver status = Running (err=<nil>)
	I1202 16:00:05.845389  429942 status.go:176] multinode-732778 status: &{Name:multinode-732778 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 16:00:05.845416  429942 status.go:174] checking status of multinode-732778-m02 ...
	I1202 16:00:05.845791  429942 cli_runner.go:164] Run: docker container inspect multinode-732778-m02 --format={{.State.Status}}
	I1202 16:00:05.864040  429942 status.go:371] multinode-732778-m02 host status = "Running" (err=<nil>)
	I1202 16:00:05.864067  429942 host.go:66] Checking if "multinode-732778-m02" exists ...
	I1202 16:00:05.864320  429942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-732778-m02
	I1202 16:00:05.884054  429942 host.go:66] Checking if "multinode-732778-m02" exists ...
	I1202 16:00:05.884311  429942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 16:00:05.884348  429942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-732778-m02
	I1202 16:00:05.903588  429942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33034 SSHKeyPath:/home/jenkins/minikube-integration/22021-264555/.minikube/machines/multinode-732778-m02/id_rsa Username:docker}
	I1202 16:00:06.001925  429942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 16:00:06.015114  429942 status.go:176] multinode-732778-m02 status: &{Name:multinode-732778-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1202 16:00:06.015160  429942 status.go:174] checking status of multinode-732778-m03 ...
	I1202 16:00:06.015523  429942 cli_runner.go:164] Run: docker container inspect multinode-732778-m03 --format={{.State.Status}}
	I1202 16:00:06.036554  429942 status.go:371] multinode-732778-m03 host status = "Stopped" (err=<nil>)
	I1202 16:00:06.036597  429942 status.go:384] host is not running, skipping remaining checks
	I1202 16:00:06.036606  429942 status.go:176] multinode-732778-m03 status: &{Name:multinode-732778-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
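
As the output above shows, stopping a single node leaves the rest of the cluster running and status reports the mix with a non-zero exit; in shell terms (exit code 7 is what this run produced):

  out/minikube-linux-amd64 -p multinode-732778 node stop m03
  out/minikube-linux-amd64 -p multinode-732778 status
  echo $?   # 7 in this run, since multinode-732778-m03 is reported as Stopped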

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-732778 node start m03 -v=5 --alsologtostderr: (6.613337923s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.36s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-732778
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-732778
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-732778: (29.564004902s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-732778 --wait=true -v=5 --alsologtostderr
E1202 16:00:43.736775  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-732778 --wait=true -v=5 --alsologtostderr: (49.179475151s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-732778
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.87s)
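
The restart check above is: record the node list, stop the whole cluster, start it again with --wait=true, and confirm the node list is unchanged. Condensed:

  out/minikube-linux-amd64 node list -p multinode-732778
  out/minikube-linux-amd64 stop -p multinode-732778
  out/minikube-linux-amd64 start -p multinode-732778 --wait=true -v=5 --alsologtostderr
  out/minikube-linux-amd64 node list -p multinode-732778   # should match the list captured before the stop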

                                                
                                    
TestMultiNode/serial/DeleteNode (5.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-732778 node delete m03: (4.655978572s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.28s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-732778 stop: (28.491670067s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-732778 status: exit status 7 (106.591464ms)

                                                
                                                
-- stdout --
	multinode-732778
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-732778-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-732778 status --alsologtostderr: exit status 7 (100.813053ms)

                                                
                                                
-- stdout --
	multinode-732778
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-732778-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 16:02:06.210304  439723 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:02:06.210402  439723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:02:06.210410  439723 out.go:374] Setting ErrFile to fd 2...
	I1202 16:02:06.210414  439723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:02:06.210646  439723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:02:06.210809  439723 out.go:368] Setting JSON to false
	I1202 16:02:06.210835  439723 mustload.go:66] Loading cluster: multinode-732778
	I1202 16:02:06.210890  439723 notify.go:221] Checking for updates...
	I1202 16:02:06.211214  439723 config.go:182] Loaded profile config "multinode-732778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:02:06.211229  439723 status.go:174] checking status of multinode-732778 ...
	I1202 16:02:06.211774  439723 cli_runner.go:164] Run: docker container inspect multinode-732778 --format={{.State.Status}}
	I1202 16:02:06.230537  439723 status.go:371] multinode-732778 host status = "Stopped" (err=<nil>)
	I1202 16:02:06.230560  439723 status.go:384] host is not running, skipping remaining checks
	I1202 16:02:06.230567  439723 status.go:176] multinode-732778 status: &{Name:multinode-732778 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 16:02:06.230593  439723 status.go:174] checking status of multinode-732778-m02 ...
	I1202 16:02:06.230853  439723 cli_runner.go:164] Run: docker container inspect multinode-732778-m02 --format={{.State.Status}}
	I1202 16:02:06.248942  439723 status.go:371] multinode-732778-m02 host status = "Stopped" (err=<nil>)
	I1202 16:02:06.248965  439723 status.go:384] host is not running, skipping remaining checks
	I1202 16:02:06.248995  439723 status.go:176] multinode-732778-m02 status: &{Name:multinode-732778-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.70s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-732778 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1202 16:02:24.680130  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-732778 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.521327981s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-732778 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.15s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-732778
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-732778-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-732778-m02 --driver=docker  --container-runtime=crio: exit status 14 (83.351938ms)

                                                
                                                
-- stdout --
	* [multinode-732778-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-732778-m02' is duplicated with machine name 'multinode-732778-m02' in profile 'multinode-732778'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-732778-m03 --driver=docker  --container-runtime=crio
E1202 16:03:12.974401  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-732778-m03 --driver=docker  --container-runtime=crio: (23.262936548s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-732778
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-732778: exit status 80 (296.017489ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-732778 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-732778-m03 already exists in multinode-732778-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-732778-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-732778-m03: (2.412706818s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.12s)

                                                
                                    
TestPreload (107.44s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-760936 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-760936 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (48.661511568s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-760936 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-760936 image pull gcr.io/k8s-minikube/busybox: (2.204162485s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-760936
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-760936: (6.253387784s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-760936 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-760936 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (47.642935953s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-760936 image list
helpers_test.go:175: Cleaning up "test-preload-760936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-760936
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-760936: (2.439185974s)
--- PASS: TestPreload (107.44s)
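
The preload test above boils down to: build a cluster without the preloaded image tarball, side-load an extra image, stop, restart with preload enabled, and check that the side-loaded image is still present. In shorthand (image name from this run):

  out/minikube-linux-amd64 start -p test-preload-760936 --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p test-preload-760936 image pull gcr.io/k8s-minikube/busybox
  out/minikube-linux-amd64 stop -p test-preload-760936
  out/minikube-linux-amd64 start -p test-preload-760936 --preload=true --wait=true --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p test-preload-760936 image list   # busybox should still be listed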

                                                
                                    
TestScheduledStopUnix (99.44s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-259576 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-259576 --memory=3072 --driver=docker  --container-runtime=crio: (22.867600814s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-259576 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 16:05:39.112915  456840 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:05:39.113151  456840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:05:39.113160  456840 out.go:374] Setting ErrFile to fd 2...
	I1202 16:05:39.113164  456840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:05:39.113356  456840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:05:39.113620  456840 out.go:368] Setting JSON to false
	I1202 16:05:39.113714  456840 mustload.go:66] Loading cluster: scheduled-stop-259576
	I1202 16:05:39.114031  456840 config.go:182] Loaded profile config "scheduled-stop-259576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:05:39.114097  456840 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/config.json ...
	I1202 16:05:39.114262  456840 mustload.go:66] Loading cluster: scheduled-stop-259576
	I1202 16:05:39.114370  456840 config.go:182] Loaded profile config "scheduled-stop-259576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-259576 -n scheduled-stop-259576
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 16:05:39.518833  456990 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:05:39.519184  456990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:05:39.519197  456990 out.go:374] Setting ErrFile to fd 2...
	I1202 16:05:39.519205  456990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:05:39.519791  456990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:05:39.520364  456990 out.go:368] Setting JSON to false
	I1202 16:05:39.520611  456990 daemonize_unix.go:73] killing process 456875 as it is an old scheduled stop
	I1202 16:05:39.520725  456990 mustload.go:66] Loading cluster: scheduled-stop-259576
	I1202 16:05:39.521110  456990 config.go:182] Loaded profile config "scheduled-stop-259576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:05:39.521196  456990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/config.json ...
	I1202 16:05:39.521384  456990 mustload.go:66] Loading cluster: scheduled-stop-259576
	I1202 16:05:39.521525  456990 config.go:182] Loaded profile config "scheduled-stop-259576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1202 16:05:39.526396  268099 retry.go:31] will retry after 121.344µs: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.527570  268099 retry.go:31] will retry after 177.02µs: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.528746  268099 retry.go:31] will retry after 213.605µs: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.529904  268099 retry.go:31] will retry after 479.819µs: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.531077  268099 retry.go:31] will retry after 496.486µs: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.532260  268099 retry.go:31] will retry after 960.485µs: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.533414  268099 retry.go:31] will retry after 1.468307ms: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.535670  268099 retry.go:31] will retry after 2.350166ms: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.538946  268099 retry.go:31] will retry after 1.809608ms: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.541188  268099 retry.go:31] will retry after 4.9557ms: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.546497  268099 retry.go:31] will retry after 7.549407ms: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.554773  268099 retry.go:31] will retry after 6.017306ms: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.561065  268099 retry.go:31] will retry after 8.179962ms: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.570342  268099 retry.go:31] will retry after 21.268965ms: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.592623  268099 retry.go:31] will retry after 19.526091ms: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
I1202 16:05:39.612921  268099 retry.go:31] will retry after 35.116557ms: open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-259576 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1202 16:05:43.736694  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-259576 -n scheduled-stop-259576
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-259576
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-259576 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 16:06:05.462084  457625 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:06:05.462335  457625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:06:05.462344  457625 out.go:374] Setting ErrFile to fd 2...
	I1202 16:06:05.462348  457625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:06:05.462577  457625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:06:05.462836  457625 out.go:368] Setting JSON to false
	I1202 16:06:05.462912  457625 mustload.go:66] Loading cluster: scheduled-stop-259576
	I1202 16:06:05.463262  457625 config.go:182] Loaded profile config "scheduled-stop-259576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 16:06:05.463337  457625 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/scheduled-stop-259576/config.json ...
	I1202 16:06:05.463592  457625 mustload.go:66] Loading cluster: scheduled-stop-259576
	I1202 16:06:05.463723  457625 config.go:182] Loaded profile config "scheduled-stop-259576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-259576
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-259576: exit status 7 (86.014643ms)

                                                
                                                
-- stdout --
	scheduled-stop-259576
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-259576 -n scheduled-stop-259576
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-259576 -n scheduled-stop-259576: exit status 7 (86.61144ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-259576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-259576
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-259576: (4.974041957s)
--- PASS: TestScheduledStopUnix (99.44s)
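Note on the flow above: the scheduled-stop check schedules a stop and then polls `minikube status --format={{.Host}}` until it reports "Stopped"; the exit status 7 on those status calls is expected once the host is down ("may be ok"). A minimal sketch of that polling pattern in Go, using a hypothetical helper name that is not part of the test suite:

	// schedstop_poll_sketch.go - illustrative only, not the real test code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForStopped polls `minikube status --format={{.Host}}` for the given
	// profile until it prints "Stopped" or the timeout expires. The command's
	// non-zero exit (status 7 for a stopped cluster) is deliberately ignored;
	// only the printed host state is inspected.
	func waitForStopped(profile string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, _ := exec.Command("minikube", "status",
				"--format={{.Host}}", "-p", profile).Output()
			if strings.TrimSpace(string(out)) == "Stopped" {
				return nil
			}
			time.Sleep(5 * time.Second)
		}
		return fmt.Errorf("profile %s did not stop within %s", profile, timeout)
	}

	func main() {
		if err := waitForStopped("scheduled-stop-259576", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}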

                                                
                                    
x
+
TestInsufficientStorage (9.47s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-319725 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-319725 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.905763137s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d37b360e-e4db-4aee-8090-8412ad615dc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-319725] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b58e9d84-4e9a-4de5-a375-a26ea2e56104","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22021"}}
	{"specversion":"1.0","id":"01faef46-95e7-4a37-af33-d9a41a305b75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"541a2cc2-bbcd-496a-915c-340d2484551b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig"}}
	{"specversion":"1.0","id":"016d945f-5827-4f02-9605-834ff15d959c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube"}}
	{"specversion":"1.0","id":"b6722076-3356-4225-95b0-1dd8abe8d07c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"94fb6169-44cd-49e4-95a7-462760624c40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8acac8e4-c2aa-4a4e-b7e0-278d3497a747","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"32cea576-f40a-4ecc-ad2c-1f0fbe48811e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7f01e4f0-4986-421b-851a-d962cad9db5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4a1ee46-a694-4423-9b47-b1c9010dbdd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"98dec200-2d7f-4ae0-a97d-5ddc9e335739","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-319725\" primary control-plane node in \"insufficient-storage-319725\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c482f525-4659-49cc-be8b-41c70c3ff91b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d16547b7-3fce-472b-b384-a5e7d57d7779","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0343d3cc-b262-4f9f-95ff-1c60ace38b8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-319725 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-319725 --output=json --layout=cluster: exit status 7 (316.29113ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-319725","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-319725","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 16:07:02.830905  460149 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-319725" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-319725 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-319725 --output=json --layout=cluster: exit status 7 (309.042713ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-319725","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-319725","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 16:07:03.141416  460257 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-319725" does not appear in /home/jenkins/minikube-integration/22021-264555/kubeconfig
	E1202 16:07:03.153536  460257 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/insufficient-storage-319725/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-319725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-319725
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-319725: (1.934971377s)
--- PASS: TestInsufficientStorage (9.47s)
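For reference when reading the `--output=json` stream above: each stdout line is a CloudEvents-style JSON object, so the error record (type io.k8s.sigs.minikube.error, carrying data.message and data.exitcode) can be filtered out with a few lines of Go. A minimal sketch, assuming the stream is piped in on stdin; the field names are taken from the JSON lines shown above:

	// cloudevent_filter_sketch.go - illustrative only.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip anything that is not a JSON event line
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit %s: %s\n", e.Data["exitcode"], e.Data["message"])
			}
		}
	}

Piping the insufficient-storage start output through such a filter would surface the RSRC_DOCKER_STORAGE message and exit code 26 shown above.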

                                                
                                    
x
+
TestRunningBinaryUpgrade (326.85s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
E1202 16:07:06.804914  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2666709899 start -p running-upgrade-136818 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2666709899 start -p running-upgrade-136818 --memory=3072 --vm-driver=docker  --container-runtime=crio: (50.357156094s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-136818 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-136818 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.636880359s)
helpers_test.go:175: Cleaning up "running-upgrade-136818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-136818
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-136818: (2.498313918s)
--- PASS: TestRunningBinaryUpgrade (326.85s)

                                                
                                    
x
+
TestKubernetesUpgrade (317.55s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921401 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-921401 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.127740849s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-921401
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-921401: (1.927522434s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-921401 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-921401 status --format={{.Host}}: exit status 7 (81.115131ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921401 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-921401 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m43.005786438s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-921401 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921401 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
I1202 16:13:28.015746  268099 config.go:182] Loaded profile config "auto-589300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-921401 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (94.349172ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-921401] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-921401
	    minikube start -p kubernetes-upgrade-921401 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9214012 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-921401 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921401 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-921401 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.141329313s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-921401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-921401
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-921401: (3.107615986s)
--- PASS: TestKubernetesUpgrade (317.55s)
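As the output above shows, the attempted downgrade is rejected with K8S_DOWNGRADE_UNSUPPORTED and exit status 106 rather than modifying the running cluster. A hedged sketch of asserting that behaviour from Go (illustrative only; this is not the actual test code):

	// downgrade_check_sketch.go - hypothetical; mirrors the exit-status handling
	// implied by the K8S_DOWNGRADE_UNSUPPORTED output above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "kubernetes-upgrade-921401",
			"--memory=3072", "--kubernetes-version=v1.28.0",
			"--driver=docker", "--container-runtime=crio")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
			fmt.Println("downgrade correctly rejected (K8S_DOWNGRADE_UNSUPPORTED)")
			return
		}
		fmt.Println("unexpected result:", err)
	}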

                                                
                                    
x
+
TestMissingContainerUpgrade (70.84s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1789523927 start -p missing-upgrade-881462 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1789523927 start -p missing-upgrade-881462 --memory=3072 --driver=docker  --container-runtime=crio: (21.670293183s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-881462
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-881462: (4.317622671s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-881462
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-881462 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-881462 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.886177527s)
helpers_test.go:175: Cleaning up "missing-upgrade-881462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-881462
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-881462: (2.58053141s)
--- PASS: TestMissingContainerUpgrade (70.84s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.42s)

                                                
                                    
x
+
TestPause/serial/Start (59.49s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-907557 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-907557 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (59.487151841s)
--- PASS: TestPause/serial/Start (59.49s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (311.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.4270194913 start -p stopped-upgrade-937293 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1202 16:07:24.680377  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.4270194913 start -p stopped-upgrade-937293 --memory=3072 --vm-driver=docker  --container-runtime=crio: (49.558103311s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.4270194913 -p stopped-upgrade-937293 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.4270194913 -p stopped-upgrade-937293 stop: (2.169986644s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-937293 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-937293 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m19.847401883s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (311.58s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.52s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-907557 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-907557 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.506700243s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-556855 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-556855 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (95.158525ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-556855] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (21.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-556855 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-556855 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.812842478s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-556855 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (21.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (23.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-556855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-556855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.951997104s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-556855 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-556855 status -o json: exit status 2 (350.425678ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-556855","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-556855
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-556855: (2.020791561s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-556855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-556855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.048051958s)
--- PASS: TestNoKubernetes/serial/Start (7.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22021-264555/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-556855 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-556855 "sudo systemctl is-active --quiet service kubelet": exit status 1 (294.02537ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
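The check above leans on systemd semantics: `systemctl is-active --quiet` exits 0 for an active unit and non-zero (3 for "inactive") otherwise, so a non-zero exit from the ssh command is the passing case here. A small illustrative Go sketch of the same idea, with a hypothetical helper name:

	// kubelet_inactive_sketch.go - illustrative only, not part of the test suite.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func kubeletActive(profile string) (bool, error) {
		cmd := exec.Command("minikube", "ssh", "-p", profile,
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		if err == nil {
			return true, nil // unit is active
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return false, nil // non-zero exit: kubelet not running, the expected case
		}
		return false, err // ssh/driver failure rather than unit state
	}

	func main() {
		active, err := kubeletActive("NoKubernetes-556855")
		fmt.Println("kubelet active:", active, "err:", err)
	}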

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (16.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.711846983s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.62s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-556855
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-556855: (1.283376854s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-556855 --driver=docker  --container-runtime=crio
E1202 16:10:43.736571  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-556855 --driver=docker  --container-runtime=crio: (7.835395131s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-556855 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-556855 "sudo systemctl is-active --quiet service kubelet": exit status 1 (306.432608ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-589300 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-589300 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (171.777925ms)

                                                
                                                
-- stdout --
	* [false-589300] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 16:10:52.243102  511883 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:10:52.243222  511883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:10:52.243231  511883 out.go:374] Setting ErrFile to fd 2...
	I1202 16:10:52.243238  511883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:10:52.243488  511883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-264555/.minikube/bin
	I1202 16:10:52.244068  511883 out.go:368] Setting JSON to false
	I1202 16:10:52.245215  511883 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10393,"bootTime":1764681459,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:10:52.245278  511883 start.go:143] virtualization: kvm guest
	I1202 16:10:52.247266  511883 out.go:179] * [false-589300] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:10:52.248451  511883 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:10:52.248477  511883 notify.go:221] Checking for updates...
	I1202 16:10:52.250505  511883 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:10:52.251772  511883 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-264555/kubeconfig
	I1202 16:10:52.253196  511883 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-264555/.minikube
	I1202 16:10:52.254505  511883 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:10:52.255774  511883 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:10:52.257393  511883 config.go:182] Loaded profile config "kubernetes-upgrade-921401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 16:10:52.257601  511883 config.go:182] Loaded profile config "running-upgrade-136818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1202 16:10:52.257727  511883 config.go:182] Loaded profile config "stopped-upgrade-937293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1202 16:10:52.257854  511883 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:10:52.281971  511883 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:10:52.282127  511883 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:10:52.345843  511883 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-02 16:10:52.335569772 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:10:52.345950  511883 docker.go:319] overlay module found
	I1202 16:10:52.348045  511883 out.go:179] * Using the docker driver based on user configuration
	I1202 16:10:52.349401  511883 start.go:309] selected driver: docker
	I1202 16:10:52.349440  511883 start.go:927] validating driver "docker" against <nil>
	I1202 16:10:52.349458  511883 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:10:52.351043  511883 out.go:203] 
	W1202 16:10:52.352374  511883 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1202 16:10:52.353613  511883 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-589300 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-589300

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-589300

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-589300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-589300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-589300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-589300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-589300

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-589300

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-589300

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-589300

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-589300

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-589300" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-589300" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:09:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-921401
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:08:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-136818
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:08:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-937293
contexts:
- context:
    cluster: kubernetes-upgrade-921401
    user: kubernetes-upgrade-921401
  name: kubernetes-upgrade-921401
- context:
    cluster: running-upgrade-136818
    user: running-upgrade-136818
  name: running-upgrade-136818
- context:
    cluster: stopped-upgrade-937293
    user: stopped-upgrade-937293
  name: stopped-upgrade-937293
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-921401
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kubernetes-upgrade-921401/client.crt
    client-key: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kubernetes-upgrade-921401/client.key
- name: running-upgrade-136818
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/running-upgrade-136818/client.crt
    client-key: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/running-upgrade-136818/client.key
- name: stopped-upgrade-937293
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/client.crt
    client-key: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-589300

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: cri-docker daemon status:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: cri-docker daemon config:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: cri-dockerd version:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: containerd daemon status:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: containerd daemon config:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: /etc/containerd/config.toml:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: containerd config dump:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: crio daemon status:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: crio daemon config:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: /etc/crio:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

>>> host: crio config:
* Profile "false-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589300"

----------------------- debugLogs end: false-589300 [took: 3.357300207s] --------------------------------
helpers_test.go:175: Cleaning up "false-589300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-589300
--- PASS: TestNetworkPlugins/group/false (3.70s)
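
Every host-level query in the debugLogs block above returns the same "Profile not found" message, presumably because no false-589300 cluster was ever started during this run. The follow-up those messages suggest can be run directly against the same binary to confirm which profiles do exist on the host:
  out/minikube-linux-amd64 profile list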

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-937293
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-937293: (1.295608439s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (38.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (38.634201677s)
--- PASS: TestNetworkPlugins/group/auto/Start (38.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (41.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1202 16:13:12.972014  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.037524496s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-589300 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (8.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-589300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-llr7v" [426a5026-fe80-4bd9-bc69-8dc399934a43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-llr7v" [426a5026-fe80-4bd9-bc69-8dc399934a43] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004392308s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-589300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
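
For reference, the probes behind this group's NetCatPod, DNS, Localhost and HairPin checks can be replayed by hand against the same profile; the commands below are taken from the log lines above (the netcat deployment comes from the suite's testdata/netcat-deployment.yaml):
  kubectl --context auto-589300 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context auto-589300 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
The same sequence repeats below for the kindnet, calico, custom-flannel, enable-default-cni, flannel and bridge profiles.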

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (51.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (51.005509652s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-8ckdc" [531dcad5-83bf-4998-8364-9a6197fec1c0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004193984s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-589300 "pgrep -a kubelet"
I1202 16:13:45.392249  268099 config.go:182] Loaded profile config "kindnet-589300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-589300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xf95q" [ccb4653b-2cc5-4f6e-a09b-e67a8bba3a00] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xf95q" [ccb4653b-2cc5-4f6e-a09b-e67a8bba3a00] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003626337s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-589300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (49.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (49.878126808s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (43.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (43.778159275s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (43.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-mtpsz" [c0c2742b-7b83-4eb4-acf2-c924c54956bd] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-mtpsz" [c0c2742b-7b83-4eb4-acf2-c924c54956bd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004294409s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (50.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (50.989857417s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-589300 "pgrep -a kubelet"
I1202 16:14:34.687017  268099 config.go:182] Loaded profile config "calico-589300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-589300 replace --force -f testdata/netcat-deployment.yaml
I1202 16:14:35.242956  268099 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1202 16:14:35.500888  268099 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dbscg" [73cfb49f-42e6-4ec3-9cff-b058273dcfa2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dbscg" [73cfb49f-42e6-4ec3-9cff-b058273dcfa2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004794009s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-589300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-589300 "pgrep -a kubelet"
I1202 16:14:46.818106  268099 config.go:182] Loaded profile config "custom-flannel-589300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-589300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wpjks" [485c751e-4dc5-49cb-8068-a66e7a9817ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wpjks" [485c751e-4dc5-49cb-8068-a66e7a9817ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004490126s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-589300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-589300 "pgrep -a kubelet"
I1202 16:15:00.858212  268099 config.go:182] Loaded profile config "enable-default-cni-589300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-589300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9tjlg" [189f5f46-abaf-4b48-87ea-f38f4d527f18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9tjlg" [189f5f46-abaf-4b48-87ea-f38f4d527f18] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003955902s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (35.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-589300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (35.357277921s)
--- PASS: TestNetworkPlugins/group/bridge/Start (35.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-589300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (54.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (54.943334836s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (54.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-vmb2c" [02217880-bf64-4e35-b6de-73e8d8e712d3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004207118s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-589300 "pgrep -a kubelet"
I1202 16:15:28.321230  268099 config.go:182] Loaded profile config "flannel-589300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-589300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z5c5z" [cf195b24-4a18-4909-935d-356bbf4ebe71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z5c5z" [cf195b24-4a18-4909-935d-356bbf4ebe71] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003372438s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (47.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (47.547005717s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (47.55s)
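
Unlike the other StartStop groups, no-preload passes --preload=false; as a rough reading of the flag (not something the log itself states), this makes minikube pull the v1.35.0-beta.0 images instead of loading them from a preloaded tarball. The invocation, as logged:
  out/minikube-linux-amd64 start -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0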

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-589300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-589300 "pgrep -a kubelet"
E1202 16:15:43.736405  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-310311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1202 16:15:44.023116  268099 config.go:182] Loaded profile config "bridge-589300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-589300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jvc7s" [747dfdd0-da8e-454e-acf1-7238d2dd9eb1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jvc7s" [747dfdd0-da8e-454e-acf1-7238d2dd9eb1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.005135476s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-589300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-589300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (42.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (42.626443493s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.63s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-380588 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [44dc2786-babf-4e74-89be-27670ac97906] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [44dc2786-babf-4e74-89be-27670ac97906] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004529313s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-380588 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)
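
Each group's DeployApp step follows the pattern shown here: apply the suite's busybox manifest, wait for the pod to reach Running, then execute a trivial command inside it. Against this profile that amounts to (commands as logged):
  kubectl --context old-k8s-version-380588 create -f testdata/busybox.yaml
  kubectl --context old-k8s-version-380588 exec busybox -- /bin/sh -c "ulimit -n"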

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (38.757741297s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.76s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-534842 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8068757f-9d6b-462a-901f-ba1d7b811746] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8068757f-9d6b-462a-901f-ba1d7b811746] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003519484s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-534842 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-380588 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-380588 --alsologtostderr -v=3: (16.188891932s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-534842 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-534842 --alsologtostderr -v=3: (16.334062438s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.33s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-380588 -n old-k8s-version-380588
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-380588 -n old-k8s-version-380588: exit status 7 (89.121416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-380588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (51.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.549935074s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-380588 -n old-k8s-version-380588
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.10s)
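
The stop/restart cycle exercised by Stop, EnableAddonAfterStop and SecondStart uses only the commands already shown in this group; the non-zero exit status from the intermediate status check reflects the "Stopped" host state, which the test explicitly tolerates ("may be ok"). A replay sketch, with commands as logged:
  out/minikube-linux-amd64 stop -p old-k8s-version-380588 --alsologtostderr -v=3
  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-380588 -n old-k8s-version-380588
  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-380588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
  out/minikube-linux-amd64 start -p old-k8s-version-380588 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=crio --kubernetes-version=v1.28.0
  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-380588 -n old-k8s-version-380588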

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-046271 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [20ecb04e-b6d3-4f0a-802c-8042502b49f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [20ecb04e-b6d3-4f0a-802c-8042502b49f9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004839359s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-046271 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534842 -n no-preload-534842
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534842 -n no-preload-534842: exit status 7 (95.516857ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-534842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (44.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-534842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (44.382070872s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534842 -n no-preload-534842
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.87s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-806420 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5fb97362-c18a-4a19-bcc3-d79520c4276f] Pending
helpers_test.go:352: "busybox" [5fb97362-c18a-4a19-bcc3-d79520c4276f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5fb97362-c18a-4a19-bcc3-d79520c4276f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003906998s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-806420 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (17.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-046271 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-046271 --alsologtostderr -v=3: (17.851691658s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (17.85s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (17.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-806420 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-806420 --alsologtostderr -v=3: (17.137598358s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (17.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-046271 -n embed-certs-046271
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-046271 -n embed-certs-046271: exit status 7 (81.98199ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-046271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (49.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-046271 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (48.88797977s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-046271 -n embed-certs-046271
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420: exit status 7 (99.700066ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-806420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1202 16:17:24.681128  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/addons-141726/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-806420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (50.733078674s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-806420 -n default-k8s-diff-port-806420
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mwmcm" [4a0441b6-699b-4b02-a86a-76b28b735c51] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.009580934s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-6hz4c" [71f568fa-e1cc-4595-b4a4-74dfc6e54a71] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003619512s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mwmcm" [4a0441b6-699b-4b02-a86a-76b28b735c51] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004195302s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-380588 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-6hz4c" [71f568fa-e1cc-4595-b4a4-74dfc6e54a71] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003724451s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-534842 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-380588 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-534842 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (29.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (29.182722454s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lwffd" [e21d932d-5db6-4487-98a0-524c1f3e89be] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003015266s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lwffd" [e21d932d-5db6-4487-98a0-524c1f3e89be] Running
E1202 16:18:12.971283  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/functional-298630/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004245922s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-046271 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-q97zr" [ca45743c-72e2-4121-81ae-644834d5eb2d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00335364s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-046271 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-q97zr" [ca45743c-72e2-4121-81ae-644834d5eb2d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003817411s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-806420 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.67s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-682353 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-682353 --alsologtostderr -v=3: (2.674752531s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.67s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-806420 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-682353 -n newest-cni-682353
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-682353 -n newest-cni-682353: exit status 7 (81.025289ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-682353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
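The sequence above can be replayed by hand against any stopped profile; a minimal sketch, using the newest-cni-682353 profile from this run and the same commands the test issues:

  # status exits with code 7 when the host exists but is stopped (treated as "may be ok" above)
  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-682353 -n newest-cni-682353
  # addons can be enabled while the cluster is down; the override swaps in the MetricsScraper image used by the test
  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-682353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4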

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (11.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-682353 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (11.203397282s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-682353 -n newest-cni-682353
E1202 16:18:38.520331  268099 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/auto-589300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.55s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-682353 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
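The image audit above is driven by "image list --format=json"; a rough sketch of an equivalent manual check (the jq filter and the repoTags field name are assumptions about the JSON shape, not the test's implementation):

  # list images in the profile and keep anything not served from a *.k8s.io registry
  out/minikube-linux-amd64 -p newest-cni-682353 image list --format=json \
    | jq -r '.[].repoTags[]?' | grep -v 'k8s.io/'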

                                                
                                    

Test skip (33/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.67
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
380 TestNetworkPlugins/group/kubenet 3.6
388 TestNetworkPlugins/group/cilium 3.93
395 TestStartStop/group/disable-driver-mounts 0.18
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1202 15:15:39.397059  268099 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1202 15:15:40.046324  268099 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
W1202 15:15:40.063293  268099 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.67s)
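The skip above is decided purely by the two HTTP 404s recorded in the warnings; a minimal sketch for re-checking the same preload URLs by hand (curl usage is an assumption, the URLs are copied from the log):

  # both mirrors currently return 404 for the v1.35.0-beta.0 + cri-o preload, so there is nothing for the test to verify
  curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 | head -n 1
  curl -sI https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 | head -n 1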

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-589300 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-589300

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-589300

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-589300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-589300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-589300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-589300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-589300

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-589300

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-589300

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-589300

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-589300

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-589300" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-589300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-589300" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:09:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-921401
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:08:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-136818
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:08:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-937293
contexts:
- context:
    cluster: kubernetes-upgrade-921401
    user: kubernetes-upgrade-921401
  name: kubernetes-upgrade-921401
- context:
    cluster: running-upgrade-136818
    user: running-upgrade-136818
  name: running-upgrade-136818
- context:
    cluster: stopped-upgrade-937293
    user: stopped-upgrade-937293
  name: stopped-upgrade-937293
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-921401
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kubernetes-upgrade-921401/client.crt
    client-key: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kubernetes-upgrade-921401/client.key
- name: running-upgrade-136818
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/running-upgrade-136818/client.crt
    client-key: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/running-upgrade-136818/client.key
- name: stopped-upgrade-937293
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/client.crt
    client-key: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/client.key
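Every kubectl probe in this debugLogs section fails because no kubenet-589300 context exists in the kubeconfig above (only the three upgrade profiles are listed, and current-context is empty). A minimal sketch of pointing kubectl at one of the contexts that does exist (profile names taken from the dump):

  kubectl config use-context kubernetes-upgrade-921401
  # or target a context per invocation without changing the default
  kubectl --context running-upgrade-136818 get nodes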

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-589300

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589300"

                                                
                                                
----------------------- debugLogs end: kubenet-589300 [took: 3.437820617s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-589300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-589300
--- SKIP: TestNetworkPlugins/group/kubenet (3.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-589300 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-589300

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-589300

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-589300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-589300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-589300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-589300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-589300

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-589300

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-589300

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-589300

>>> host: /etc/nsswitch.conf:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: /etc/hosts:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: /etc/resolv.conf:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-589300

>>> host: crictl pods:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: crictl containers:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> k8s: describe netcat deployment:
error: context "cilium-589300" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-589300" does not exist

>>> k8s: netcat logs:
error: context "cilium-589300" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-589300" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-589300" does not exist

>>> k8s: coredns logs:
error: context "cilium-589300" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-589300" does not exist

>>> k8s: api server logs:
error: context "cilium-589300" does not exist

>>> host: /etc/cni:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: ip a s:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: ip r s:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: iptables-save:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: iptables table nat:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-589300

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-589300

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-589300" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-589300" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-589300

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-589300

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-589300" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-589300" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-589300" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-589300" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-589300" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: kubelet daemon config:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> k8s: kubelet logs:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:09:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-921401
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:08:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-136818
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-264555/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:08:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-937293
contexts:
- context:
    cluster: kubernetes-upgrade-921401
    user: kubernetes-upgrade-921401
  name: kubernetes-upgrade-921401
- context:
    cluster: running-upgrade-136818
    user: running-upgrade-136818
  name: running-upgrade-136818
- context:
    cluster: stopped-upgrade-937293
    user: stopped-upgrade-937293
  name: stopped-upgrade-937293
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-921401
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kubernetes-upgrade-921401/client.crt
    client-key: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/kubernetes-upgrade-921401/client.key
- name: running-upgrade-136818
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/running-upgrade-136818/client.crt
    client-key: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/running-upgrade-136818/client.key
- name: stopped-upgrade-937293
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/client.crt
    client-key: /home/jenkins/minikube-integration/22021-264555/.minikube/profiles/stopped-upgrade-937293/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-589300

>>> host: docker daemon status:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: docker daemon config:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: docker system info:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: cri-docker daemon status:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: cri-docker daemon config:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: cri-dockerd version:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: containerd daemon status:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: containerd daemon config:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: containerd config dump:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: crio daemon status:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: crio daemon config:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: /etc/crio:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

>>> host: crio config:
* Profile "cilium-589300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589300"

----------------------- debugLogs end: cilium-589300 [took: 3.760081986s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-589300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-589300
--- SKIP: TestNetworkPlugins/group/cilium (3.93s)

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-904481" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-904481
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
